Sample records for utility maximization model

  1. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
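
    A minimal sketch of the paper's qualitative prediction (illustrative only, not the authors' free-energy formulation): score each option by expected utility plus a weighted entropy bonus, so an entropy-seeking agent prefers dispersed outcomes even at equal expected utility. The additive form and the w_entropy weight are assumptions introduced here:

        import numpy as np

        def choice_value(p_outcomes, utilities, w_entropy):
            # Expected utility plus an entropy bonus ("keep options open").
            p = np.asarray(p_outcomes, dtype=float)
            eu = float(np.dot(p, utilities))                 # expected utility
            entropy = -float(np.sum(p * np.log(p + 1e-12)))  # outcome entropy
            return eu + w_entropy * entropy

        # Two options with equal expected utility; the entropy bonus favors
        # the option whose outcomes are more dispersed.
        sure_thing = choice_value([1.0], [0.5], w_entropy=0.2)
        spread_out = choice_value([0.5, 0.5], [0.4, 0.6], w_entropy=0.2)
        print(sure_thing, spread_out)   # 0.5 vs. ~0.64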

  2. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
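
    As a concrete illustration of the decision rule being analyzed, the sketch below (hypothetical utility values) implements a maximum-expected-utility classifier for three classes; the equal error utility assumption appears as equal off-diagonal entries within each row of the utility matrix:

        import numpy as np

        # U[i, j]: utility of deciding class j when class i is true. Under the
        # equal error utility assumption, both wrong decisions under a given
        # truth carry the same utility (equal off-diagonal entries per row).
        U = np.array([[1.0, 0.2, 0.2],
                      [0.1, 1.0, 0.1],
                      [0.3, 0.3, 1.0]])

        def meu_decision(posterior):
            # Choose the class whose posterior-weighted expected utility is largest.
            expected_utility = np.asarray(posterior, dtype=float) @ U
            return int(np.argmax(expected_utility))

        print(meu_decision([0.5, 0.3, 0.2]))  # -> 0 for this posterior and U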

  3. Planning Routes Across Economic Terrains: Maximizing Utility, Following Heuristics

    PubMed Central

    Zhang, Hang; Maddula, Soumya V.; Maloney, Laurence T.

    2010-01-01

    We designed an economic task to investigate human planning of routes in landscapes where travel in different kinds of terrain incurs different costs. Participants moved their finger across a touch screen from a starting point to a destination. The screen was divided into distinct kinds of terrain and travel within each kind of terrain imposed a cost proportional to distance traveled. We varied costs and spatial configurations of terrains and participants received fixed bonuses minus the total cost of the routes they chose. We first compared performance to a model maximizing gain. All but one of 12 participants failed to adopt least-cost routes and their failure to do so reduced their winnings by about 30% (median value). We tested in detail whether participants’ choices of routes satisfied three necessary conditions (heuristics) for a route to maximize gain. We report failures of one heuristic for 7 out of 12 participants. Last of all, we modeled human performance with the assumption that participants assign subjective utilities to costs and maximize utility. For 7 out of 12 participants, the fitted utility function was an accelerating power function of actual cost and for the remaining 5, a decelerating power function. We discuss connections between utility aggregation in route planning and decision under risk. Our task could be adapted to investigate human strategy and optimality of route planning in full-scale landscapes. PMID:21833269
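
    The fitted model can be paraphrased in a few lines (routes, costs, and exponents invented here): map each segment's objective cost through a power function and pick the route with the smallest summed subjective cost. Varying the exponent gamma shows how an accelerating versus decelerating utility function can reverse which route looks best:

        def route_disutility(segment_costs, gamma):
            # Subjective cost of a route: sum of power-transformed segment costs.
            return sum(c ** gamma for c in segment_costs)

        routes = {
            "cut_across_sand": [3.0, 1.0],       # objective cost per terrain segment
            "go_around_road":  [1.5, 1.5, 1.5],
        }
        for gamma in (0.7, 1.0, 1.4):            # decelerating, linear, accelerating
            best = min(routes, key=lambda r: route_disutility(routes[r], gamma))
            print(gamma, best)                   # the preferred route flips with gamma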

  4. The Self in Decision Making and Decision Implementation.

    ERIC Educational Resources Information Center

    Beach, Lee Roy; Mitchell, Terence R.

    Since the early 1950s, the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…

  5. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  6. Choice Inconsistencies among the Elderly: Evidence from Plan Choice in the Medicare Part D Program: Comment.

    PubMed

    Ketcham, Jonathan D; Kuminoff, Nicolai V; Powers, Christopher A

    2016-12-01

    Consumers' enrollment decisions in Medicare Part D can be explained by Abaluck and Gruber’s (2011) model of utility maximization with psychological biases or by a neoclassical version of their model that precludes such biases. We evaluate these competing hypotheses by applying nonparametric tests of utility maximization and model validation tests to administrative data. We find that 79 percent of enrollment decisions from 2006 to 2010 satisfied basic axioms of consumer theory under the assumption of full information. The validation tests provide evidence against widespread psychological biases. In particular, we find that precluding psychological biases improves the structural model's out-of-sample predictions for consumer behavior.
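
    The "basic axioms" tested are revealed-preference conditions; a minimal sketch of one such nonparametric check (WARP, on hypothetical plan-choice data, not the authors' test on administrative records) is:

        def violates_warp(choices):
            # choices: list of (chosen_plan, set_of_affordable_plans).
            # WARP fails if a is chosen when b was affordable AND b is chosen
            # when a was affordable.
            revealed = {(c, alt) for c, menu in choices for alt in menu if alt != c}
            return any((b, a) in revealed for (a, b) in revealed)

        choices = [("planA", {"planA", "planB", "planC"}),
                   ("planB", {"planA", "planB"})]
        print(violates_warp(choices))   # True: A chosen over B, and B over A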

  7. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Nikhar; Tom, Nathan M

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
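
    A minimal sketch of the multiterm objective described here (stand-in dynamics, not the authors' WEC model): score a candidate power-take-off force trajectory over the horizon by absorbed power minus a quadratic load penalty, with forecast velocities standing in for the Kalman/autoregressive wave predictions. The force laws and the weight w_load are assumptions:

        import numpy as np

        def horizon_objective(force, velocity, w_load):
            power = force * velocity            # instantaneous absorbed power
            load_penalty = w_load * force ** 2  # crude proxy for structural load
            return float(np.sum(power - load_penalty))

        rng = np.random.default_rng(0)
        velocity = rng.normal(size=50)          # forecast buoy velocity over horizon
        candidates = [k * velocity for k in (0.2, 0.5, 0.8)]  # simple force laws
        best = max(candidates,
                   key=lambda f: horizon_objective(f, velocity, w_load=0.3))
        print(horizon_objective(best, velocity, w_load=0.3))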

  8. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Nikhar; Tom, Nathan

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  9. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
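
    The comparison of solution procedures can be miniaturized as below (invented parcels, utilities, costs, and budget): a greedy benefit-per-cost heuristic against brute-force search standing in for the integer program, showing the kind of gap optimization can close:

        from itertools import combinations

        parcels = {"p1": (10, 5), "p2": (6, 3), "p3": (6, 3), "p4": (5, 4)}  # (utility, cost)
        BUDGET = 6

        def greedy(parcels, budget):
            # Take parcels in benefit-per-cost order while the budget allows.
            chosen, spent = [], 0
            for name, (u, c) in sorted(parcels.items(),
                                       key=lambda kv: -kv[1][0] / kv[1][1]):
                if spent + c <= budget:
                    chosen.append(name)
                    spent += c
            return chosen, sum(parcels[p][0] for p in chosen)

        def optimal(parcels, budget):
            # Exhaustive search plays the role of the integer program here.
            best = ([], 0)
            for r in range(len(parcels) + 1):
                for combo in combinations(parcels, r):
                    if sum(parcels[p][1] for p in combo) <= budget:
                        util = sum(parcels[p][0] for p in combo)
                        if util > best[1]:
                            best = (list(combo), util)
            return best

        print(greedy(parcels, BUDGET))   # (['p1'], 10): the heuristic gets stuck
        print(optimal(parcels, BUDGET))  # (['p2', 'p3'], 12): optimization gains 20%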

  10. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  11. Limit order placement as an utility maximization problem and the origin of power law distribution of limit order prices

    NASA Astrophysics Data System (ADS)

    Lillo, F.

    2007-02-01

    I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight into the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.

  12. Using Debate to Maximize Learning Potential: A Case Study

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda

    2007-01-01

    Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…

  13. Maintaining homeostasis by decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2015-05-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
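
    The random-walk logic can be sketched directly (invented environment statistics and horizon; the paper derives endpoint distributions analytically rather than by simulation): energy loses a fixed amount per step and gains probabilistically, and each environment is scored by its simulated starvation probability:

        import numpy as np

        def starvation_prob(p_gain, gain, loss, start=10.0, steps=50,
                            sims=20000, seed=1):
            # Fraction of simulated energy trajectories that ever hit zero.
            rng = np.random.default_rng(seed)
            energy = np.full(sims, start)
            starved = np.zeros(sims, dtype=bool)
            for _ in range(steps):
                gains = rng.random(sims) < p_gain
                energy = energy - loss + gains * gain
                starved |= energy <= 0
            return float(starved.mean())

        # Same expected drift (+0.2 per step), different variance:
        print(starvation_prob(p_gain=0.4, gain=3.0, loss=1.0))  # risky environment
        print(starvation_prob(p_gain=0.8, gain=1.5, loss=1.0))  # safe environment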

  14. Maintaining Homeostasis by Decision-Making

    PubMed Central

    Korn, Christoph W.; Bach, Dominik R.

    2015-01-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that—in both the foraging and the casino frames—participants’ choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization. PMID:26024504

  15. Program Monitoring: Problems and Cases.

    ERIC Educational Resources Information Center

    Lundin, Edward; Welty, Gordon

    Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…

  16. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    This paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility strives to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, enforces the contractual and regulatory restrictions, and ensures the uniqueness of the optimum SPPF schedule.

  17. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    The present paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility will strive to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, enforces the contractual and regulatory restrictions, and ensures the uniqueness of the optimum SPPF schedule.

  18. Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.

    PubMed

    Merrick, Jason R W; Leclerc, Philip

    2016-04-01

    Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek the rational decision, but we use prospect theory to solve for the attacker's decision, descriptively modeling the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision, whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions.
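
    For concreteness, a sketch of the two prospect-theory ingredients used for the attacker (Tversky-Kahneman 1992 functional forms with their conventional parameter estimates; a separable rather than cumulative weighting is used here for brevity):

        def value(x, alpha=0.88, lam=2.25):
            # Loss-averse value function: losses loom lam times larger.
            return x ** alpha if x >= 0 else -lam * (-x) ** alpha

        def weight(p, gamma=0.61):
            # Inverse-S probability weighting: overweights small probabilities.
            return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

        def prospect_score(outcomes, probs):
            return sum(weight(p) * value(x) for x, p in zip(outcomes, probs))

        # An attacker comparing two hypothetical attack plans:
        print(prospect_score([100, -50], [0.1, 0.9]))  # long shot, likely loss
        print(prospect_score([20, -5], [0.5, 0.5]))    # modest payoff, even odds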

  19. Relevance of a Managerial Decision-Model to Educational Administration.

    ERIC Educational Resources Information Center

    Lundin, Edward.; Welty, Gordon

    The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…

  20. Partitioning-based mechanisms under personalized differential privacy.

    PubMed

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-05-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
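
    A minimal sketch of the partition-then-perturb idea (fixed threshold and invented budgets; the paper instead optimizes the partitioning): group users by personal epsilon and release each partition's count with Laplace noise calibrated to the weakest epsilon in the group:

        import numpy as np

        rng = np.random.default_rng(0)
        personal_eps = rng.uniform(0.1, 1.0, size=1000)  # per-user privacy budgets

        def noisy_count(eps_group):
            # Count query has sensitivity 1; noise scale is set by the group's
            # smallest (most demanding) epsilon.
            if eps_group.size == 0:
                return 0.0
            return eps_group.size + rng.laplace(0.0, 1.0 / eps_group.min())

        cut = 0.5                                        # one split -> two partitions
        low = personal_eps[personal_eps < cut]
        high = personal_eps[personal_eps >= cut]
        print(noisy_count(low) + noisy_count(high), "vs true", personal_eps.size)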

  21. Partitioning-based mechanisms under personalized differential privacy

    PubMed Central

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-01-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms. PMID:28932827

  22. Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization

    NASA Astrophysics Data System (ADS)

    Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.

    2016-06-01

    Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which could maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in IBM ILOG CPLEX Studio. Real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands are used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.

  23. Deriving the Dividend Discount Model in the Intermediate Microeconomics Class

    ERIC Educational Resources Information Center

    Norman, Stephen; Schlaudraff, Jonathan; White, Karianne; Wills, Douglas

    2013-01-01

    In this article, the authors show that the dividend discount model can be derived using the basic intertemporal consumption model that is introduced in a typical intermediate microeconomics course. This result will be of use to instructors who teach microeconomics to finance students in that it demonstrates the value of utility maximization in…

  24. Optimal Resource Allocation in Library Systems

    ERIC Educational Resources Information Center

    Rouse, William B.

    1975-01-01

    Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
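
    A minimal sketch of that formulation (two M/M/1 queues, invented arrival rates and utility weights): split a fixed service capacity so that the weighted expected waiting cost is minimized, i.e., the decision-maker's expected utility is maximized:

        ARRIVALS = (2.0, 3.5)     # request rates at two service points
        WEIGHTS = (1.0, 2.0)      # decision-maker's utility weights
        CAPACITY = 10.0           # total service rate to allocate

        def expected_wait(lam, mu):
            # M/M/1 mean time in system; infinite if the queue is unstable.
            return float("inf") if mu <= lam else 1.0 / (mu - lam)

        def utility(mu1):
            mu2 = CAPACITY - mu1
            return -(WEIGHTS[0] * expected_wait(ARRIVALS[0], mu1)
                     + WEIGHTS[1] * expected_wait(ARRIVALS[1], mu2))

        best = max((m / 10.0 for m in range(1, 100)), key=utility)
        print(best, utility(best))   # grid-search allocation, ~3.9 here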

  25. Can differences in breast cancer utilities explain disparities in breast cancer care?

    PubMed

    Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael

    2006-12-01

    Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
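
    The decision model can be sketched as a small Markov cohort computation (invented transition probabilities and horizon; only the structure, elicited state utilities feeding a quality-adjusted life expectancy comparison, mirrors the study):

        import numpy as np

        # States: stage II, metastatic, dead; utilities from the elicited medians.
        utils = np.array([0.75, 0.36, 0.0])
        P_chemo = np.array([[0.88, 0.07, 0.05],
                            [0.00, 0.75, 0.25],
                            [0.00, 0.00, 1.00]])   # yearly transitions, invented
        P_none = np.array([[0.78, 0.14, 0.08],
                           [0.00, 0.75, 0.25],
                           [0.00, 0.00, 1.00]])

        def qale(P, years=10):
            dist = np.array([1.0, 0.0, 0.0])       # cohort starts in stage II
            total = 0.0
            for _ in range(years):
                total += float(dist @ utils)       # quality-adjusted year
                dist = dist @ P
            return total

        # One-time quality loss (0.25 QALYs, invented) for undergoing chemotherapy:
        print("chemo:", qale(P_chemo) - 0.25, "none:", qale(P_none))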

  26. A steady-state stomatal model of balanced leaf gas exchange, hydraulics and maximal source-sink flux.

    PubMed

    Hölttä, Teemu; Lintunen, Anna; Chan, Tommy; Mäkelä, Annikki; Nikinmaa, Eero

    2017-07-01

    Trees must simultaneously balance their CO2 uptake rate via stomata, photosynthesis, the transport rate of sugars and rate of sugar utilization in sinks while maintaining a favourable water and carbon balance. We demonstrate using a numerical model that it is possible to understand stomatal functioning from the viewpoint of maximizing the simultaneous photosynthetic production, phloem transport and sink sugar utilization rate under the limitation that the transpiration-driven hydrostatic pressure gradient sets for those processes. A key feature in our model is that non-stomatal limitations to photosynthesis increase with decreasing leaf water potential and/or increasing leaf sugar concentration and are thus coupled to stomatal conductance. Maximizing the photosynthetic production rate using a numerical steady-state model leads to stomatal behaviour that is able to reproduce the well-known trends of stomatal behaviour in response to, e.g., light, vapour concentration difference, ambient CO2 concentration, soil water status, sink strength and xylem and phloem hydraulic conductance. We show that our results for stomatal behaviour are very similar to the solutions given by the earlier models of stomatal conductance derived solely from gas exchange considerations. Our modelling results also demonstrate how the 'marginal cost of water' in the unified stomatal conductance model and the optimal stomatal model could be related to plant structural and physiological traits, most importantly, the soil-to-leaf hydraulic conductance and soil moisture.

  27. Decision Making Analysis: Critical Factors-Based Methodology

    DTIC Science & Technology

    2010-04-01

    the pitfalls associated with current wargaming methods such as assuming a western view of rational values in decision-making regardless of the cultures...Utilization theory slightly expands the rational decision-making model as it states that “actors try to maximize their expected utility by weighing the...items to categorize the decision-making behavior of political leaders which tend to demonstrate either a rational or cognitive leaning. Leaders

  28. A test of ecological optimality for semiarid vegetation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.

    1992-01-01

    Three ecological optimality hypotheses which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model are reviewed and tested. The first hypothesis involves short-term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second hypothesis involves vegetation type selection again through soil moisture maximization, and the third involves soil genesis through plant induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.

  29. Using whole disease modeling to inform resource allocation decisions: economic evaluation of a clinical guideline for colorectal cancer using a single model.

    PubMed

    Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine

    2013-06-01

    To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines.

  30. Multiple-Use Site Demand Analysis: An Application to the Boundary Waters Canoe Area Wilderness.

    ERIC Educational Resources Information Center

    Peterson, George L.; And Others

    1982-01-01

    A single-site, multiple-use model for analyzing trip demand is derived from a multiple site regional model based on utility maximizing choice theory. The model is used to analyze and compare trips to the Boundary Waters Canoe Area Wilderness for several types of use. Travel cost elasticities of demand are compared and discussed. (Authors/JN)

  31. Modeling static and dynamic human cardiovascular responses to exercise.

    PubMed

    Stremel, R W; Bernauer, E M; Harter, L W; Schultz, R A; Walters, R F

    1975-08-01

    A human performance model has been developed and described [9] which portrays the human circulatory, thermoregulatory and energy-exchange systems as an intercoupled set. In this model, steady state or static relationships are used to describe oxygen consumption and blood flow. For example, heart rate (HTRT) is calculated as a function of the oxygen and the thermoregulatory requirements of each body compartment, using the steady state work values of cardiac output (CO, sum of all compartment blood flows) and stroke volume (SV, assumed maximal after 40% maximal oxygen consumption): HTRT = CO/SV. The steady state model has proven to be an acceptable first approximation, but the inclusion of transient characteristics is essential in describing the overall system's adjustment to exercise stress. In the present study, the dynamic transient characteristics of heart rate, stroke volume and cardiac output were obtained from experiments utilizing step and sinusoidal forcing of work. The gain and phase relationships reveal a probable first-order system with a six-minute time constant, and are utilized to model the transient characteristics of these parameters. This approach leads to a more complex model but a more accurate representation of the physiology involved. The instrumentation and programming essential to these experiments are described.
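
    The two relationships named here, the steady-state rule HTRT = CO/SV and the fitted first-order transient, combine into a one-line response model (resting and exercise values below are textbook-style placeholders, not the authors' parameters):

        import math

        def heart_rate(t_min, hr_rest=70.0, co=15.0, sv=0.11, tau=6.0):
            # Steady-state target from HTRT = CO / SV, approached with a
            # first-order lag (six-minute time constant from the experiments).
            hr_steady = co / sv
            return hr_rest + (hr_steady - hr_rest) * (1.0 - math.exp(-t_min / tau))

        for t in (0, 3, 6, 12, 30):       # minutes after exercise onset
            print(t, round(heart_rate(t), 1))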

  32. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of resulting modeling error within a boundary element from the error produced in another boundary element as a function of geometric distance. © 1985.

  33. Taxi trips distribution modeling based on Entropy-Maximizing theory: A case study in Harbin city-China

    NASA Astrophysics Data System (ADS)

    Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie

    2018-03-01

    Understanding the Origin-Destination distribution of taxi trips is very important for improving the effects of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days are treated as the training dataset and data from the last ten days as the testing dataset. The training dataset is used to calibrate the model while the testing dataset is used to validate it. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model versus the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. Findings of the study validate the feasibility of deriving OD distributions from taxi GPS data in urban systems.
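
    The entropy-maximizing distribution reduces to a gravity-like form T_ij ∝ exp(-beta·c_ij) with balancing factors enforcing the zone totals; a minimal sketch (invented zone totals, costs, and beta) solves it by iterative proportional fitting:

        import numpy as np

        origins = np.array([120.0, 80.0, 50.0])   # trips produced per zone
        dests = np.array([100.0, 90.0, 60.0])     # trips attracted per zone
        cost = np.array([[1.0, 2.0, 3.0],
                         [2.0, 1.0, 2.0],
                         [3.0, 2.0, 1.0]])        # generalized cost c_ij
        beta = 0.8                                # deterrence parameter

        T = np.exp(-beta * cost)                  # entropy-maximizing seed matrix
        for _ in range(100):                      # balance rows, then columns
            T *= (origins / T.sum(axis=1))[:, None]
            T *= (dests / T.sum(axis=0))[None, :]

        print(np.round(T, 1))                     # OD matrix matching both margins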

  34. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
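
    A minimal sketch of the model-based idea (a synthetic stand-in metric, not the authors' OCT signal or the DONE Fourier expansion): sample the image-quality metric at a few mirror settings, fit a quadratic model, and jump to its vertex instead of scanning exhaustively:

        import numpy as np

        def metric(a):                         # peak at a = 0.7 (unknown to the optimizer)
            return 1.0 - (a - 0.7) ** 2

        samples = np.array([-1.0, 0.0, 1.0])   # three probe settings
        coeffs = np.polyfit(samples, [metric(a) for a in samples], 2)
        best = -coeffs[1] / (2 * coeffs[0])    # vertex of the fitted parabola
        print(best, metric(best))              # recovers 0.7 from three measurements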

  35. Model-based intensification of a fed-batch microbial process for the maximization of polyhydroxybutyrate (PHB) production rate.

    PubMed

    Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas

    2017-08-01

    An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen sources utilization, oxygen mass transfer and uptake rates and average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity for desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen sufficient one. The calculated optimal operating policies resulted in a maximum PHB content (94% g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight average molecular weights of up to 1513 kg/mol could be produced via the optimal selection of bioprocess operating conditions.

  36. A Model for Long Range Planning for Seminole Community College.

    ERIC Educational Resources Information Center

    Miner, Norris

    A model for long-range planning designed to maximize involvement of college personnel, to improve communication among various areas of the college, to provide a process for evaluation of long-range plans and the planning process, to adjust to changing conditions, to utilize data developed at a level useful for actual operations, and to have…

  37. SLA Negotiation for VO Formation

    NASA Astrophysics Data System (ADS)

    Paurobally, Shamimabi

    Resource management systems are changing from localized resources and services towards virtual organizations (VOs) sharing millions of heterogeneous resources across multiple organizations and domains. The virtual organizations and usage models include a variety of owners and consumers with different usage, access policies, cost models, varying loads, requirements and availability. The stakeholders have private utility functions that must be satisfied and possibly maximized.

  38. Applying Intermediate Microeconomics to Terrorism

    ERIC Educational Resources Information Center

    Anderton, Charles H.; Carter, John R.

    2006-01-01

    The authors show how microeconomic concepts and principles are applicable to the study of terrorism. The utility maximization model provides insights into both terrorist resource allocation choices and government counterterrorism efforts, and basic game theory helps characterize the strategic interdependencies among terrorists and governments.…

  39. Effects of lung ventilation–perfusion and muscle metabolism–perfusion heterogeneities on maximal O2 transport and utilization

    PubMed Central

    Cano, I; Roca, J; Wagner, P D

    2015-01-01

    Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle, but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation–perfusion and skeletal muscle metabolism–perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude, and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (VO2) and on mitochondrial PO2 (PmitoO2). In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 (PaO2) at sea level by 32 mmHg, but muscle VO2 by only 122 ml min−1 (–3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced PaO2 by less than 1 mmHg and VO2 by 32 ml min−1 (–2.4%). Skeletal muscle heterogeneity led to a wide range of potential PmitoO2 among muscle regions, a range that becomes narrower as VO2 increases, and in regions with a low ratio of metabolic capacity to blood flow, PmitoO2 can exceed the PO2 of mixed muscle venous blood. For patients with severe COPD, peak VO2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of both VO2 and PmitoO2 in health and in diseases such as COPD if the extent of both lung ventilation–perfusion and tissue metabolism–perfusion heterogeneity is known. PMID:25640017

  40. Integrating epidemiology, psychology, and economics to achieve HPV vaccination targets.

    PubMed

    Basu, Sanjay; Chapman, Gretchen B; Galvani, Alison P

    2008-12-02

    Human papillomavirus (HPV) vaccines provide an opportunity to reduce the incidence of cervical cancer. Optimization of cervical cancer prevention programs requires anticipation of the degree to which the public will adhere to vaccination recommendations. To compare vaccination levels driven by public perceptions with levels that are optimal for maximizing the community's overall utility, we develop an epidemiological game-theoretic model of HPV vaccination. The model is parameterized with survey data on actual perceptions regarding cervical cancer, genital warts, and HPV vaccination collected from parents of vaccine-eligible children in the United States. The results suggest that perceptions of survey respondents generate vaccination levels far lower than those that maximize overall health-related utility for the population. Vaccination goals may be achieved by addressing concerns about vaccine risk, particularly those related to sexual activity among adolescent vaccine recipients. In addition, cost subsidizations and shifts in federal coverage plans may compensate for perceived and real costs of HPV vaccination to achieve public health vaccination targets.
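
    The gap the model highlights, individually rational coverage falling short of the socially optimal level, can be sketched with a crude linear risk model (all parameters invented; the paper uses a full epidemiological game-theoretic model):

        COST_VACC = 0.1                  # perceived cost/risk of vaccinating
        COST_DISEASE = 1.0               # cost of infection
        R0 = 2.0

        def infection_risk(coverage):
            # Crude proxy: risk falls linearly to zero at the herd immunity level.
            herd = 1.0 - 1.0 / R0
            return 0.0 if coverage >= herd else COST_DISEASE * (1 - coverage / herd)

        # Nash: individuals vaccinate until the risk equals the perceived cost.
        nash = min((c / 100 for c in range(101)),
                   key=lambda c: abs(infection_risk(c) - COST_VACC))
        # Social planner: minimize total expected cost across the population.
        social = min((c / 100 for c in range(101)),
                     key=lambda c: c * COST_VACC + (1 - c) * infection_risk(c))
        print("Nash coverage:", nash, "socially optimal coverage:", social)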

  41. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  42. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy-Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  43. Application of a forest-simulation model to assess the energy yield and ecological impact of forest utilization for energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doyle, T W; Shugart, H H; West, D C

    1981-01-01

    This study examines the utilization and management of natural forest lands to meet growing wood-energy demands. An application of a forest simulation model is described for assessing energy returns and long-term ecological impacts of wood-energy harvesting under four general silvicultural practices. Results indicate that moderate energy yields could be expected from mild cutting operations which would significantly affect neither the commercial timber market nor the composition, structure, or diversity of these forests. Forest models can provide an effective tool for determining optimal management strategies that maximize energy returns, minimize environmental detriment, and complement existing land-use plans.

  44. Collective states in social systems with interacting learning agents

    NASA Astrophysics Data System (ADS)

    Semeshenko, Viktoriya; Gordon, Mirta B.; Nadal, Jean-Pierre

    2008-08-01

    We study the implications of social interactions and individual learning features on consumer demand in a simple market model. We consider a social system of interacting heterogeneous agents with learning abilities. Given a fixed price, agents repeatedly decide whether or not to buy a unit of a good, so as to maximize their expected utilities. This model is close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. We show that the equilibrium reached depends on the nature of the information agents use to estimate their expected utilities. It may differ from the system's Nash equilibria.
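
    The buying rule in such Random-Field-Ising-type models can be sketched in a few lines (ring network, synchronous myopic best responses, invented parameters): an agent buys when idiosyncratic willingness to pay plus social pressure from buying neighbors exceeds the price:

        import numpy as np

        rng = np.random.default_rng(2)
        N, J, price = 200, 0.8, 0.5
        h = rng.normal(0.3, 0.4, size=N)     # idiosyncratic willingness to pay
        buy = np.zeros(N, dtype=bool)

        for _ in range(50):                  # iterate best responses on a ring
            neighbor_share = (np.roll(buy, 1) + np.roll(buy, -1)) / 2.0
            buy = h + J * neighbor_share > price

        print("adoption rate:", buy.mean())  # the collective state reached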

  45. Wildlife tradeoffs based on landscape models of habitat preference

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.; White, M.

    2000-01-01

    Wildlife tradeoffs based on landscape models of habitat preference were presented. Multiscale logistic regression models were used and based on these models a spatial optimization technique was utilized to generate optimal maps. The tradeoffs were analyzed by gradually increasing the weighting on a single species in the objective function over a series of simulations. Results indicated that efficiency of habitat management for species diversity could be maximized for small landscapes by incorporating spatial context.

  46. Power maximization of variable-speed variable-pitch wind turbines using passive adaptive neural fault tolerant control

    NASA Astrophysics Data System (ADS)

    Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian

    2017-09-01

    Power maximization has always been a practical consideration in wind turbines. The question of how to address optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies the control methodology for variable-speed variable-pitch wind turbines including the effects of uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the known system, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated by utilizing radial basis function neural networks. The adaptive neural fault tolerant control is designed passively to be robust to model uncertainties, disturbances including wind speed and model noise, and completely unknown actuator faults including generator torque and pitch actuator torque. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.

  47. A Bayesian Approach to Interactive Retrieval

    ERIC Educational Resources Information Center

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  48. Influencing Busy People in a Social Network

    PubMed Central

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127
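
    The submodularity-backed greedy at the core of this line of work can be sketched as follows (independent-cascade spread on a random graph, without the paper's resource constraints or multiple behaviors; all parameters invented): repeatedly add the seed with the largest Monte Carlo estimate of marginal spread:

        import numpy as np

        rng = np.random.default_rng(0)
        N, P_EDGE, P_ACT = 60, 0.08, 0.15
        adj = rng.random((N, N)) < P_EDGE          # random directed graph

        def spread(seeds, sims=200):
            # Monte Carlo estimate of expected cascade size from these seeds.
            total = 0
            for _ in range(sims):
                active, frontier = set(seeds), list(seeds)
                while frontier:
                    node = frontier.pop()
                    for nb in np.flatnonzero(adj[node]):
                        if nb not in active and rng.random() < P_ACT:
                            active.add(int(nb))
                            frontier.append(int(nb))
                total += len(active)
            return total / sims

        seeds = []
        for _ in range(3):                         # greedy, one seed at a time
            gains = {v: spread(seeds + [v]) for v in range(N) if v not in seeds}
            seeds.append(max(gains, key=gains.get))
        print(seeds, spread(seeds))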

  9. Influencing Busy People in a Social Network.

    PubMed

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions, and resource constraints. Second, we show that multiple-behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors, and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach.

  10. Retrieval of cloud cover parameters from multispectral satellite images

    NASA Technical Reports Server (NTRS)

    Arking, A.; Childs, J. D.

    1985-01-01

    A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particle and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with cloud fraction for each pixel adjusted to achieve maximal clustering.

  11. Comprehensive home-based care coordination for vulnerable elders with dementia: Maximizing Independence at Home-Plus-Study protocol.

    PubMed

    Samus, Quincy M; Davis, Karen; Willink, Amber; Black, Betty S; Reuland, Melissa; Leoutsakos, Jeannie; Roth, David L; Wolff, Jennifer; Gitlin, Laura N; Lyketsos, Constantine G; Johnston, Deirdre

    2017-12-01

    Despite availability of effective care strategies for dementia, most health care systems are not yet organized or equipped to provide comprehensive family-centered dementia care management. Maximizing Independence at Home-Plus is a promising new model of dementia care coordination being tested in the U.S. through a Health Care Innovation Award funded by the Centers for Medicare and Medicaid Services that may serve as a model to address these delivery gaps, improve outcomes, and lower costs. This report provides an overview of the Health Care Innovation Award aims, study design, and methodology. This is a prospective, quasi-experimental intervention study of 342 community-living Medicare-Medicaid dual eligibles and Medicare-only beneficiaries with dementia in Maryland. Primary analyses will assess the impact of Maximizing Independence at Home-Plus on risk of nursing home long-term care placement, hospitalization, and health care expenditures (Medicare, Medicaid) at 12, 18 (primary end point), and 24 months, compared to a propensity-matched comparison group. The goals of the Maximizing Independence at Home-Plus model are to improve care coordination, ability to remain at home, and life quality for participants and caregivers, while reducing total costs of care for this vulnerable population. This Health Care Innovation Award project will provide timely information on the impact of Maximizing Independence at Home-Plus care coordination model on a variety of outcomes including effects on Medicaid and Medicare expenditures and service utilization. Participant characteristic data, cost savings, and program delivery costs will be analyzed to develop a risk-adjusted payment model to encourage sustainability and facilitate spread.

  12. Timber and Amenities on Nonindustrial Private Forest Land

    Treesearch

    Subhrendu K. Pattanayak; Karen Lee Abt; Thomas P. Holmes

    2003-01-01

    Economic analyses of the joint production of timber and amenities from nonindustrial private forest lands (NIPF) have been conducted for several decades. Binkley (1981) summarized this strand of research and elegantly articulated a microeconomic household model in which NIPF owners maximize utility by choosing optimal combinations of timber income and amenities. Most...

  13. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  14. An entropic framework for modeling economies

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel; Golan, Amos

    2014-08-01

    We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources. It captures both the welfare state of the economy and the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.
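
    To make the equilibrium notion concrete: a maximum-entropy state under an expectation constraint is an exponential-family distribution whose Lagrange multiplier plays the role the abstract assigns to price. The five-state good, target mean, and bracketing interval in this sketch are illustrative assumptions:

    import numpy as np
    from scipy.optimize import brentq

    # Illustrative five-state distribution of a good, constrained to a
    # mean consumption <g> = 1.5; none of these numbers come from the paper.
    g = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    target_mean = 1.5

    def gibbs(lam):
        # Entropy-maximizing (exponential-family) distribution for multiplier lam
        w = np.exp(-lam * g)
        return w / w.sum()

    # Solve for the Lagrange multiplier -- the quantity that plays the
    # role of an endogenously generated price in the entropic framework.
    lam = brentq(lambda l: gibbs(l) @ g - target_mean, -10.0, 10.0)
    print("multiplier (price):", round(lam, 4))
    print("equilibrium distribution:", np.round(gibbs(lam), 4))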

  15. The role of data assimilation in maximizing the utility of geospace observations (Invited)

    NASA Astrophysics Data System (ADS)

    Matsuo, T.

    2013-12-01

    Data assimilation can facilitate maximizing the utility of existing geospace observations by offering an ultimate marriage of inductive (data-driven) and deductive (first-principles based) approaches to addressing critical questions in space weather. Assimilative approaches that incorporate dynamical models are, in particular, capable of making a diverse set of observations consistent with physical processes included in a first-principles model, and allowing unobserved physical states to be inferred from observations. These points will be demonstrated in the context of the application of an ensemble Kalman filter (EnKF) to a thermosphere and ionosphere general circulation model. An important attribute of this approach is that the feedback between plasma and neutral variables is self-consistently treated both in the forecast model as well as in the assimilation scheme. This takes advantage of the intimate coupling between the thermosphere and ionosphere described in general circulation models to enable the inference of unobserved thermospheric states from the relatively plentiful observations of the ionosphere. Given the ever-growing infrastructure for the global navigation satellite system, this is indeed a promising prospect for geospace data assimilation. In principle, similar approaches can be applied to any geospace observing systems to extract more geophysical information from a given set of observations than would otherwise be possible.
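
    A minimal stochastic ensemble Kalman filter analysis step is sketched below, showing the mechanism the abstract relies on: ensemble cross-covariances let an observation of one variable correct an unobserved one. The two-state toy system, ensemble size, and noise levels are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_update(ensemble, H, y, R):
        # Stochastic EnKF analysis step: nudge each member toward perturbed
        # observations using the ensemble-estimated covariance.
        n = ensemble.shape[1]
        X = (ensemble - ensemble.mean(axis=1, keepdims=True)) / np.sqrt(n - 1)
        Y = H @ X                                    # observation-space anomalies
        K = X @ Y.T @ np.linalg.inv(Y @ Y.T + R)     # Kalman gain
        y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
        return ensemble + K @ (y_pert - H @ ensemble)

    # Two correlated states standing in for an observed ionospheric quantity
    # and an unobserved thermospheric one (purely illustrative): observing
    # the first still corrects the second through the cross-covariance.
    ens = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.8], [0.8, 1.0]], size=50).T
    H = np.array([[1.0, 0.0]])                       # only state 1 is observed
    print(enkf_update(ens, H, np.array([1.5]), np.array([[0.1]])).mean(axis=1))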

  16. Method and system for controlling a gasification or partial oxidation process

    DOEpatents

    Rozelle, Peter L; Der, Victor K

    2015-02-10

    A method and system for controlling a fuel gasification system includes optimizing a conversion of solid components in the fuel to gaseous fuel components, controlling the flux of solids entrained in the product gas through equipment downstream of the gasifier, and maximizing the overall efficiencies of processes utilizing gasification. A combination of models, when utilized together, can be integrated with existing plant control systems and operating procedures and employed to develop new control systems and operating procedures. Such an approach is further applicable to gasification systems that utilize both dry feed and slurry feed.

  17. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  18. Establishing rational networking using the DL04 quantum secure direct communication protocol

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Tang, Wallace K. S.; Tso, Raylin

    2018-06-01

    The first rational quantum secure direct communication scheme is proposed, in which we use game theory with incomplete information to model the rational behavior of the participant, and give the strategy space and utility function. The rational participant can get his maximal utility when he performs the protocol faithfully, and then the Nash equilibrium of the protocol can be achieved. Compared to the traditional schemes, our scheme will be more practical in the presence of a rational participant.

  19. Optimizing Medical Kits for Space Flight

    NASA Technical Reports Server (NTRS)

    Minard, Charles G.; FreiredeCarvalho, Mary H.; Iyengar, M. Sriram

    2010-01-01

    The Integrated Medical Model (IMM) uses Monte Carlo methodologies to predict the occurrence of medical events, their mitigation, and the resources required during space flight. The model includes two modules that utilize output from a single model simulation to identify an optimized medical kit for a specified mission scenario. This poster describes two flexible optimization routines built into SAS 9.1. The first routine utilizes a systematic process of elimination to maximize (or minimize) outcomes subject to attribute constraints. The second routine uses a search and mutate approach to minimize medical kit attributes given a set of outcome constraints. There are currently 273 unique resources identified that are used to treat at least one of 83 medical conditions currently in the model.

  20. The energy allocation function of sleep: a unifying theory of sleep, torpor, and continuous wakefulness.

    PubMed

    Schmidt, Markus H

    2014-11-01

    The energy allocation (EA) model defines behavioral strategies that optimize the temporal utilization of energy to maximize reproductive success. This model proposes that all species of the animal kingdom share a universal sleep function that shunts waking energy utilization toward sleep-dependent biological investment. For endotherms, REM sleep evolved to enhance energy appropriation for somatic and CNS-related processes by eliminating thermoregulatory defenses and skeletal muscle tone. Alternating REM with NREM sleep conserves energy by decreasing the need for core body temperature defense. Three EA phenotypes are proposed: sleep-wake cycling, torpor, and continuous (or predominant) wakefulness. Each phenotype carries inherent costs and benefits. Sleep-wake cycling downregulates specific biological processes in waking and upregulates them in sleep, thereby decreasing energy demands imposed by wakefulness, reducing cellular infrastructure requirements, and resulting in overall energy conservation. Torpor achieves the greatest energy savings, but critical biological operations are compromised. Continuous wakefulness maximizes niche exploitation, but endures the greatest energy demands. The EA model advances a new construct for understanding sleep-wake organization in ontogenetic and phylogenetic domains. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Cournot games with network effects for electric power markets

    NASA Astrophysics Data System (ADS)

    Spezia, Carl John

    The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled by a Cournot game oligopoly where suppliers compete by selecting profit-maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation will give system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
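
    For reference, the classical unconstrained baseline that the dissertation's formulation extends can be computed by iterating best responses; the linear inverse demand curve and marginal costs below are illustrative assumptions:

    # Two-supplier Cournot game with inverse demand P(Q) = a - b*Q and
    # constant marginal costs; no network constraints, illustrative only.
    a, b = 100.0, 1.0
    costs = [10.0, 20.0]

    def best_response(q_other, c):
        # Maximize q*(a - b*(q + q_other)) - c*q  =>  q = (a - c - b*q_other)/(2b)
        return max(0.0, (a - c - b * q_other) / (2 * b))

    q = [0.0, 0.0]
    for _ in range(100):      # iterate best responses to the Nash equilibrium
        q = [best_response(q[1], costs[0]), best_response(q[0], costs[1])]

    print("quantities:", [round(x, 2) for x in q], "price:", round(a - b * sum(q), 2))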

  2. The University Professor As a Utility Maximizer and Producer of Learning, Research, and Income

    ERIC Educational Resources Information Center

    Becker, William E., Jr.

    1975-01-01

    A professorial decision-making model is presented for the purpose of exploring alternative plans to raise teaching quality. It is demonstrated that an increase in the pecuniary return to teaching will raise teaching quality while exogenous changes in teaching and/or research technology need not. (Author/EA)

  3. Examining the Effects of Technology Attributes on Learning: A Contingency Perspective

    ERIC Educational Resources Information Center

    Nicholson, Jennifer; Nicholson, Darren; Valacich, Joseph S.

    2008-01-01

    In today's knowledge economy, technology is utilized more than ever to deliver instructional material to the learner. Nonetheless, information may not always be presented in a manner that maximizes the learning experience, resulting in a negative impact on learning outcomes. Drawing on the Task-Technology Fit model, a research framework was…

  4. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.

  5. Evaluating gambles using dynamics

    NASA Astrophysics Data System (ADS)

    Peters, O.; Gell-Mann, M.

    2016-02-01

    Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions.
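
    The paper's central contrast is easy to reproduce numerically: a multiplicative gamble whose ensemble-average factor exceeds one while its time-average growth rate is negative, so the typical trajectory decays. The factors 1.5 and 0.6 are illustrative numbers of the kind commonly used for this point, not taken from the paper:

    import numpy as np

    # Coin-toss gamble: wealth is multiplied by 1.5 (heads) or 0.6 (tails)
    up, down = 1.5, 0.6

    print("ensemble-average factor:", 0.5 * up + 0.5 * down)   # 1.05 > 1
    print("time-average growth rate:",
          round(0.5 * np.log(up) + 0.5 * np.log(down), 4))     # negative

    # Simulate many trajectories: the sample mean is propped up by rare
    # lucky paths, while the typical (median) trajectory decays toward zero.
    rng = np.random.default_rng(1)
    w = np.ones(100_000)
    for _ in range(100):
        w *= np.where(rng.random(w.size) < 0.5, up, down)
    print("mean terminal wealth:   %.3e" % w.mean())
    print("median terminal wealth: %.3e" % np.median(w))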

  6. Use of the hyperinsulinemic euglycemic clamp to assess insulin sensitivity in guinea pigs: dose response, partitioned glucose metabolism, and species comparisons.

    PubMed

    Horton, Dane M; Saint, David A; Owens, Julie A; Gatford, Kathryn L; Kind, Karen L

    2017-07-01

    The guinea pig is an alternate small animal model for the study of metabolism, including insulin sensitivity. However, only one study to date has reported the use of the hyperinsulinemic euglycemic clamp in anesthetized animals in this species, and the dose response has not been reported. We therefore characterized the dose-response curve for whole body glucose uptake using recombinant human insulin in the adult guinea pig. Interspecies comparisons with published data showed species differences in maximal whole body responses (guinea pig ≈ human < rat < mouse) and the insulin concentrations at which half-maximal insulin responses occurred (guinea pig > human ≈ rat > mouse). In subsequent studies, we used concomitant d-[3-³H]glucose infusion to characterize insulin sensitivities of whole body glucose uptake, utilization, production, storage, and glycolysis in young adult guinea pigs at human insulin doses that produced approximately half-maximal (7.5 mU·min⁻¹·kg⁻¹) and near-maximal whole body responses (30 mU·min⁻¹·kg⁻¹). Although human insulin infusion increased rates of glucose utilization (up to 68%) and storage and, at high concentrations, increased rates of glycolysis in females, glucose production was only partially suppressed (~23%), even at high insulin doses. Fasting glucose, metabolic clearance of insulin, and rates of glucose utilization, storage, and production during insulin stimulation were higher in female than in male guinea pigs (P < 0.05), but insulin sensitivity of these and whole body glucose uptake did not differ between sexes. This study establishes a method for measuring partitioned glucose metabolism in chronically catheterized conscious guinea pigs, allowing studies of regulation of insulin sensitivity in this species. Copyright © 2017 the American Physiological Society.

  7. Product-line selection and pricing with remanufacturing under availability constraints

    NASA Astrophysics Data System (ADS)

    Aras, Necati; Esenduran, Gökçe; Altinel, I. Kuban

    2004-12-01

    Product line selection and pricing are two crucial decisions for the profitability of a manufacturing firm. Remanufacturing, on the other hand, may be a profitable strategy that captures the remaining value in used products. In this paper we develop a mixed-integer nonlinear programming model from the perspective of an original equipment manufacturer (OEM). The objective of the OEM is to select products to manufacture and remanufacture among a set of given alternatives and simultaneously determine their prices so as to maximize its profit. It is assumed that the probability that a customer selects a product is proportional to its utility and inversely proportional to its price. The utility of a product is an increasing function of its perceived quality. In our base model, products are discriminated by their unit production costs and utilities. We also analyze a case where remanufacturing is limited by the available quantity of collected remanufacturable products. We show that the resulting problem decomposes into pricing and product line selection subproblems. The pricing problem is solved by a variant of the simplex search procedure which can also handle constraints, while complete enumeration and a genetic algorithm are used for the solution of the product line selection problem. A number of experiments are carried out to identify conditions under which it is economically viable for the firm to sell remanufactured products. We also determine the optimal utility and unit production cost values of a remanufactured product, which maximize the total profit of the OEM.

  8. Stochastic user equilibrium model with a tradable credit scheme and application in maximizing network reserve capacity

    NASA Astrophysics Data System (ADS)

    Han, Fei; Cheng, Lin

    2017-04-01

    The tradable credit scheme (TCS) outperforms congestion pricing in terms of social equity and revenue neutrality, while matching its performance on congestion mitigation. This article investigates the effectiveness and efficiency of TCS in enhancing transportation network capacity within a stochastic user equilibrium (SUE) modelling framework. First, the SUE and credit market equilibrium conditions are presented; then an equivalent general SUE model with TCS is established by virtue of two constructed functions, which can be further simplified under a specific probability distribution. To enhance network capacity by utilizing TCS, a bi-level mathematical programming model is established for the optimal TCS design problem, with the upper-level optimization objective maximizing network reserve capacity and the lower level being the proposed SUE model. A heuristic sensitivity analysis-based algorithm is developed to solve the bi-level model. Three numerical examples are provided to illustrate the improvement effect of TCS on the network in different scenarios.
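
    For intuition about the lower level, the sketch below computes a logit-based stochastic user equilibrium on two parallel routes whose generalized costs include a credit charge. The link parameters, credit charges, fixed credit price, and damped iteration are illustrative assumptions, far simpler than the article's general model:

    import numpy as np

    demand, theta = 1000.0, 0.1        # total trips, logit dispersion
    credit_price = 0.5                 # assumed market-clearing credit price
    credits = np.array([2.0, 0.0])     # credits charged per trip on each route

    def travel_time(flow):
        # Standard BPR volume-delay function with illustrative parameters
        t0 = np.array([10.0, 15.0])
        cap = np.array([600.0, 800.0])
        return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

    flow = np.full(2, demand / 2)
    for _ in range(500):               # damped fixed-point iteration to SUE
        cost = travel_time(flow) + credit_price * credits
        p = np.exp(-theta * cost)
        p /= p.sum()                   # logit route-choice probabilities
        flow = 0.9 * flow + 0.1 * demand * p

    print("equilibrium flows:", np.round(flow, 1))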

  9. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  10. Team Formation in Partially Observable Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2004-01-01

    Sets of multi-agent teams often need to maximize a global utility rating the performance of the entire system, where a team cannot fully observe other teams' agents. Such limited observability hinders team members, who pursue their team utilities, from taking actions that also help maximize the global utility. In this article, we show how team utilities can be used in partially observable systems. Furthermore, we show how team sizes can be manipulated to provide the best compromise between having easy-to-learn team utilities and having them aligned with the global utility. The results show that optimally sized teams in a partially observable environment outperform one team in a fully observable environment, by up to 30%.

  11. A Note on the Treatment of Uncertainty in Economics and Finance

    ERIC Educational Resources Information Center

    Carilli, Anthony M.; Dempster, Gregory M.

    2003-01-01

    The treatment of uncertainty in the business classroom has been dominated by the application of risk theory to the utility-maximization framework. Nonetheless, the relevance of the standard risk model as a positive description of economic decision making often has been called into question in theoretical work. In this article, the authors offer an…

  12. A note on the modelling of circular smallholder migration.

    PubMed

    Bigsten, A

    1988-01-01

    "It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt

  13. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    PubMed

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
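
    The choice strategy metric described above contrasts two quantities that can be stated in a few lines; the function and example values here are our reconstruction, not the study's materials:

    # Two information types for a certain-amount-versus-gamble choice
    def choice_information(certain_amount, gamble_amount, win_probability):
        expected_gamble = win_probability * gamble_amount
        maximizing = expected_gamble / certain_amount  # ratio of expected values
        satisficing = win_probability                  # probability of winning
        return maximizing, satisficing

    # $20 for sure vs. a 50% chance of $50: maximizing information favors
    # the gamble (EV ratio 1.25) even though the win probability alone is
    # unremarkable.
    mx, st = choice_information(20.0, 50.0, 0.5)
    print(f"maximizing info (EV ratio): {mx:.2f}; satisficing info (P(win)): {st:.2f}")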

  14. The futility of utility: how market dynamics marginalize Adam Smith

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2000-10-01

    Economic theorizing is based on the postulated, nonempiric notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility. The results are supposed to represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically due to nonintegrable dynamics when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data, or in real market data.

  15. Utilization of Non-Dentist Providers and Attitudes Toward New Provider Models: Findings from The National Dental Practice-Based Research Network

    PubMed Central

    Blue, Christine M.; Funkhouser, D. Ellen; Riggs, Sheila; Rindal, D. Brad; Worley, Donald; Pihlstrom, Daniel J.; Benjamin, Paul; Gilbert, Gregg H.

    2014-01-01

    Objectives The purpose of this study was to quantify within The National Dental Practice-Based Research Network current utilization of dental hygienists and assistants with expanded functions and quantify network dentists’ attitudes toward a new non-dentist provider model - the dental therapist. Methods Dental practice-based research network practitioner-investigators participated in a single, cross-sectional administration of a questionnaire. Results Current non-dentist providers are not being utilized by network practitioner-investigators to the fullest extent allowed by law. Minnesota practitioners, practitioners in large group practices, and those with prior experience with expanded function non-dentist providers delegate at a higher rate and had more-positive perceptions of the new dental therapist model. Conclusions Expanding scopes of practice for dental hygienists and assistants has not translated to the maximal delegation allowed by law among network practices. This finding may provide insight into dentists’ acceptance of newer non-dentist provider models. PMID:23668892

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, He; Sun, Yannan; Carroll, Thomas E.

    We propose a coordination algorithm for cooperative power allocation among a collection of commercial buildings within a campus. We introduce thermal and power models of a typical commercial building Heating, Ventilation, and Air Conditioning (HVAC) system, and utilize model predictive control to characterize their power flexibility. The power allocation problem is formulated as a cooperative game using the Nash Bargaining Solution (NBS) concept, in which buildings collectively maximize the product of their utilities subject to their local flexibility constraints and a total power limit set by the campus coordinator. To solve the optimal allocation problem, a distributed protocol is designed using dual decomposition of the Nash bargaining problem. Numerical simulations are performed to demonstrate the efficacy of our proposed allocation method.
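
    A centralized miniature of the bargaining objective reads as follows: maximize the product of building utilities (equivalently, the sum of their logarithms) under a campus-wide power cap. The quadratic comfort utilities, desired powers, and cap are illustrative assumptions, and the sketch uses a direct solver in place of the paper's dual-decomposition protocol:

    import numpy as np
    from scipy.optimize import minimize

    p_desired = np.array([80.0, 60.0, 100.0])   # unconstrained HVAC demands (kW)
    p_total = 180.0                             # campus-wide power limit (kW)

    def neg_log_nash_product(p):
        # Maximizing the product of utilities = minimizing -sum(log(u));
        # each toy utility peaks at the building's desired power.
        utilities = 1.0 / (1.0 + (p - p_desired) ** 2)
        return -np.sum(np.log(utilities))

    res = minimize(
        neg_log_nash_product,
        x0=p_desired * p_total / p_desired.sum(),        # feasible start
        bounds=[(0.0, d) for d in p_desired],
        constraints=[{"type": "ineq", "fun": lambda p: p_total - p.sum()}],
    )
    print("allocation (kW):", np.round(res.x, 1), " total:", round(res.x.sum(), 1))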

  17. Methodologies for Optimum Capital Expenditure Decisions for New Medical Technology

    PubMed Central

    Landau, Thomas P.; Ledley, Robert S.

    1980-01-01

    This study deals with the development of a theory and an analytical model to support decisions regarding capital expenditures for complex new medical technology. Formal methodologies and quantitative techniques developed by applied mathematicians and management scientists can be used by health planners to develop cost-effective plans for the utilization of medical technology on a community or region-wide basis. In order to maximize the usefulness of the model, it was developed and tested against multiple technologies. The types of technologies studied include capital and labor-intensive technologies, technologies whose utilization rates vary with hospital occupancy rate, technologies whose use can be scheduled, and limited-use and large-use technologies.

  18. Framing matters: Effects of framing on older adults’ exploratory decision-making

    PubMed Central

    Cooper, Jessica A.; Blanco, Nathaniel; Maddox, W. Todd

    2016-01-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults’ decision-making performance was preserved when maximizing gains, but declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains-maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subjects behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss-minimization from Experiment 1, and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults’ performance in exploratory decision-making is hindered when framed as loss-minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. PMID:27977218

  19. Framing matters: Effects of framing on older adults' exploratory decision-making.

    PubMed

    Cooper, Jessica A; Blanco, Nathaniel J; Maddox, W Todd

    2017-02-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults' decision-making performance was preserved when maximizing gains, but it declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subject behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss minimization from Experiment 1 and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults' performance in exploratory decision-making is hindered when framed as loss minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Modeling regulated water utility investment incentives

    NASA Astrophysics Data System (ADS)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when and what size) is compared with the least-cost program (social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, unlike opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence of the 'capital bias', and its extent, depends on the degree to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by the regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex, and allowing both opex and capex to be remunerated through a return on the company's regulatory capital value.

  1. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform

    PubMed Central

    Guijarro, Luis; Pla, Vicent; Vidal, Jose R.; Naldi, Maurizio; Mahmoodi, Toktam

    2017-01-01

    This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit by posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not exceed a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility. PMID:28498347

  2. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform.

    PubMed

    Guijarro, Luis; Pla, Vicent; Vidal, Jose R; Naldi, Maurizio; Mahmoodi, Toktam

    2017-05-12

    This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit by posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not exceed a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility.

  3. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to that of the one-period Markowitz mean-variance problem, solved in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
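
    The one-period problem the abstract reduces to has a well-known closed form: maximizing w'mu - (gamma/2) w'Sigma w over active weights w yields w* = Sigma^{-1} mu / gamma. The two-asset inputs below are illustrative assumptions:

    import numpy as np

    mu = np.array([0.05, 0.03])          # expected returns in excess of benchmark
    sigma = np.array([[0.04, 0.01],
                      [0.01, 0.02]])     # covariance of active returns
    gamma = 4.0                          # risk aversion

    # Mean-variance optimum: w* = Sigma^{-1} mu / gamma
    w_star = np.linalg.solve(sigma, mu) / gamma
    print("optimal active weights:", np.round(w_star, 3))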

  4. Creating a bridge between data collection and program planning: a technical assistance model to maximize the use of HIV/AIDS surveillance and service utilization data for planning purposes.

    PubMed

    Logan, Jennifer A; Beatty, Maile; Woliver, Renee; Rubinstein, Eric P; Averbach, Abigail R

    2005-12-01

    Over time, improvements in HIV/AIDS surveillance and service utilization data have increased their usefulness for planning programs, targeting resources, and otherwise informing HIV/AIDS policy. However, community planning groups, service providers, and health department staff often have difficulty in interpreting and applying the wide array of data now available. We describe the development of the Bridging Model, a technical assistance model for overcoming barriers to the use of data for program planning. Through the use of an iterative feedback loop in the model, HIV/AIDS data products constantly are evolving to better inform the decision-making tasks of their multiple users. Implementation of this model has led to improved data quality and data products and to a greater willingness and ability among stakeholders to use the data for planning purposes.

  5. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092

  6. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory

    PubMed Central

    Pastor-Bernier, Alexandre; Plott, Charles R.; Schultz, Wolfram

    2017-01-01

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices “as if” they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals’ choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals’ preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved “as if” they had well-structured preferences and maximized utility. PMID:28202727
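
    The consistency test behind the Weak Axiom mentioned above is compact: bundle i is directly revealed preferred to bundle j if j was affordable when i was chosen, and WARP forbids the reverse relation from holding as well. The two-good prices and bundles below are illustrative, not the monkey-choice data:

    import numpy as np

    prices  = np.array([[1.0, 2.0], [2.0, 1.0]])   # prices at each observation
    bundles = np.array([[4.0, 1.0], [1.0, 4.0]])   # bundles chosen at those prices

    def violates_warp(prices, bundles):
        n = len(bundles)
        for i in range(n):
            for j in range(n):
                if i == j or np.allclose(bundles[i], bundles[j]):
                    continue
                # bundle i revealed preferred to j: j affordable when i chosen
                if prices[i] @ bundles[j] <= prices[i] @ bundles[i]:
                    # ...then i must not also have been affordable when j chosen
                    if prices[j] @ bundles[i] <= prices[j] @ bundles[j]:
                        return True
        return False

    print("WARP violated:", violates_warp(prices, bundles))   # False for this data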

  7. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory.

    PubMed

    Pastor-Bernier, Alexandre; Plott, Charles R; Schultz, Wolfram

    2017-03-07

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices "as if" they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals' choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals' preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved "as if" they had well-structured preferences and maximized utility.

  8. The evolution of utility functions and psychological altruism.

    PubMed

    Clavien, Christine; Chapuisat, Michel

    2016-04-01

    Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the mathematical stability of alternative utility functions. Candidate utility functions may be evaluated with help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as own children, romantic partners and long-term reciprocators. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Robust Coordination for Large Sets of Simple Rovers

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian

    2006-01-01

    The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers, without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single-rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge, and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover-utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition we show that our method creates rover-utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations we show that our method scales well with large numbers of rovers in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method is able to scale to large numbers of rovers and achieves up to 400% performance improvement over standard machine learning methods.

  10. Maximizing Energy Savings Reliability in BC Hydro Industrial Demand-side Management Programs: An Assessment of Performance Incentive Models

    NASA Astrophysics Data System (ADS)

    Gosman, Nathaniel

    For energy utilities faced with expanded jurisdictional energy efficiency requirements and pursuing demand-side management (DSM) incentive programs in the large industrial sector, performance incentive programs can be an effective means to maximize the reliability of planned energy savings. Performance incentive programs balance the objectives of high participation rates with persistent energy savings by: (1) providing financial incentives and resources to minimize constraints to investment in energy efficiency, and (2) requiring that incentive payments be dependent on measured energy savings over time. As BC Hydro increases its DSM initiatives to meet the Clean Energy Act objective to reduce at least 66 per cent of new electricity demand with DSM by 2020, the utility is faced with a higher level of DSM risk, or uncertainties that impact the cost-effective acquisition of planned energy savings. For industrial DSM incentive programs, DSM risk can be broken down into project development and project performance risks. Development risk represents the project ramp-up phase and is the risk that planned energy savings do not materialize due to low customer response to program incentives. Performance risk represents the operational phase and is the risk that planned energy savings do not persist over the effective measure life. DSM project development and performance risks are, in turn, a result of industrial economic, technological and organizational conditions, or DSM risk factors. In the BC large industrial sector, and characteristic of large industrial sectors in general, these DSM risk factors include: (1) capital constraints to investment in energy efficiency, (2) commodity price volatility, (3) limited internal staffing resources to deploy towards energy efficiency, (4) variable load, process-based energy saving potential, and (5) a lack of organizational awareness of an operation's energy efficiency over time (energy performance). This research assessed the capacity of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), energy efficiency measures targeted and involvement of the private sector. A multi criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models.
The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.

  11. To kill a kangaroo: understanding the decision to pursue high-risk/high-gain resources.

    PubMed

    Jones, James Holland; Bird, Rebecca Bliege; Bird, Douglas W

    2013-09-22

    In this paper, we attempt to understand hunter-gatherer foraging decisions about prey that vary in both the mean and variance of energy return using an expected utility framework. We show that for skewed distributions of energetic returns, the standard linear variance discounting (LVD) model for risk-sensitive foraging can produce quite misleading results. In addition to creating difficulties for the LVD model, the skewed distributions characteristic of hunting returns create challenges for estimating probability distribution functions required for expected utility. We present a solution using a two-component finite mixture model for foraging returns. We then use detailed foraging returns data based on focal follows of individual hunters in Western Australia hunting for high-risk/high-gain (hill kangaroo) and relatively low-risk/low-gain (sand monitor) prey. Using probability densities for the two resources estimated from the mixture models, combined with theoretically sensible utility curves characterized by diminishing marginal utility for the highest returns, we find that the expected utility of the sand monitors greatly exceeds that of kangaroos despite the fact that the mean energy return for kangaroos is nearly twice as large as that for sand monitors. We conclude that the decision to hunt hill kangaroos does not arise simply as part of an energetic utility-maximization strategy and that additional social, political or symbolic benefits must accrue to hunters of this highly variable prey.
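
    The two-component mixture and the diminishing-marginal-utility argument can be reproduced numerically. The sketch below uses assumed illustrative parameters (a zero-return failure component mixed with a lognormal harvest, and an exponential-saturation utility curve), not the Western Australian follow data; it shows how a prey type with nearly twice the mean return can still yield far lower expected utility.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_returns(p_fail, mu, sigma, n):
        # Two-component mixture: total failure (zero kcal) with probability p_fail,
        # otherwise a lognormal harvest size.
        fail = rng.random(n) < p_fail
        return np.where(fail, 0.0, rng.lognormal(mu, sigma, n))

    def expected_utility(x, kappa=5000.0):
        # Concave utility: the marginal value of calories saturates for large harvests.
        return np.mean(1.0 - np.exp(-x / kappa))

    n = 100_000
    kangaroo = sample_returns(0.90, np.log(60_000), 0.3, n)  # high risk / high gain
    monitor = sample_returns(0.20, np.log(4_000), 0.5, n)    # low risk / low gain
    print("mean kcal:", kangaroo.mean(), monitor.mean())     # kangaroo mean is larger...
    print("E[u]:", expected_utility(kangaroo), expected_utility(monitor))  # ...but its E[u] is lower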

  12. Emergent Behavior of Coupled Barrier Island - Resort Systems

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Werner, B. T.

    2004-12-01

    Barrier islands are attractive sites for resorts. Natural barrier islands experience beach erosion and island overwash during storms, beach accretion and dune building during inter-storm periods, and migration up the continental shelf as sea level rises. Beach replenishment, artificial dune building, seawalls, jetties and groins have been somewhat effective in protecting resorts against erosion and overwash during storms, but it is unknown how the coupled system will respond to long-term sea level rise. We investigate coupled barrier island - resort systems using an agent-based model with three components: natural barrier islands divided into a series of alongshore cells; resorts controlled by markets for tourism and hotel purchases; and coupling via storm damage to resorts and resort protection by government agents. Modeled barrier islands change by beach erosion, island overwash and inlet cutting during storms, and beach accretion, tidal delta growth and dune and vegetation growth between storms. In the resort hotel market, developer agents build hotels and hotel owning agents purchase them using predictions of future revenue and property appreciation, with the goal of maximizing discounted utility. In the tourism market, hotel owning agents set room rental prices to maximize profit and tourist agents choose vacation destinations maximizing a utility based on beach width, price and word-of-mouth. Government agents build seawalls, groins and jetties, and widen the beach and build up dunes by adding sand to protect resorts from storms, enhance beach quality, and maximize resort revenue. Results indicate that barrier islands and resorts evolve in a coupled manner to resort size saturation, with resorts protected against small-to-intermediate-scale storms under fairly stable sea level. Under extended, rapidly rising sea level, protection measures enhance the effect of large storms, leading to emergent behavior in the form of limit cycles or barrier submergence, depending on the relative rates of resort recovery from storms and sea level rise. The model is applied to Ocean City, Maryland and neighboring undeveloped Assateague Island National Seashore. Supported by the National Science Foundation, Geology and Paleontology Program, and the Andrew W. Mellon Foundation

  13. Allocation of surgical procedures to operating rooms.

    PubMed

    Ozkarahan, I

    1995-08-01

    Reduction of health care costs is of paramount importance in our time. This paper is part of a research effort that proposes an expert hospital decision support system for resource scheduling. The proposed system combines mathematical programming, knowledge base, and database technologies, and its interface is friendly enough for novice users. Operating rooms in hospitals represent big investments and must be utilized efficiently. In this paper, a mathematical model similar to job shop scheduling models is first developed. The model loads surgical cases into operating rooms by maximizing room utilization and minimizing overtime in a multiple operating room setting. Then a prototype expert system is developed that replaces the expertise of the operations research analyst, drives the model base and database, and manages the user dialog. Finally, an overview of the sequencing procedures for operations within an operating room is also presented.
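
    The "maximize utilization, minimize overtime" objective can be expressed as a small assignment search. The sketch below uses made-up case durations and an assumed overtime penalty weight; a production system would hand the same objective to an integer programming solver instead of enumerating assignments.

    from itertools import product

    cases = [180, 120, 90, 60, 45]   # surgery durations in minutes (illustrative)
    rooms, shift = 2, 480            # two operating rooms, 8-hour regular shift

    def score(assign):
        # Credit regular-time minutes actually used; penalize overtime minutes heavily.
        total = 0.0
        for r in range(rooms):
            load = sum(d for d, a in zip(cases, assign) if a == r)
            total += min(load, shift) - 3.0 * max(0, load - shift)
        return total

    best = max(product(range(rooms), repeat=len(cases)), key=score)
    print("room assignment per case:", best, "score:", score(best))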

  14. Utility of Small Animal Models of Developmental Programming.

    PubMed

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

    Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions, which manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions required to fully maximize the information derived from preclinical models in order to effectively translate to clinical use.

  15. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability in the system.
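
    The receding-horizon structure of MPC is compact enough to sketch: at each step, search dispatch plans over a short look-ahead window using the wind forecast, apply only the first action, then re-plan. All profiles and parameters below are made-up illustrations, and diesel fuel cost is taken as proportional to diesel output; this is a sketch of the control structure, not the paper's formulation.

    import itertools
    import numpy as np

    demand = np.array([8, 9, 10, 9, 8, 7], float)   # MW, illustrative load profile
    wind_fc = np.array([3, 6, 2, 5, 7, 4], float)   # MW, illustrative wind forecast
    cap, horizon = 4.0, 3                           # battery capacity (MWh), look-ahead steps
    actions = [-2.0, -1.0, 0.0, 1.0, 2.0]           # battery charge (+) / discharge (-) per step

    def plan_cost(soc, k, acts):
        # Diesel energy needed over the look-ahead window for one candidate action sequence.
        cost = 0.0
        for j, a in enumerate(acts):
            a_eff = min(cap, max(0.0, soc + a)) - soc     # respect state-of-charge limits
            soc += a_eff
            net = demand[k + j] - wind_fc[k + j] + a_eff  # charging adds load, discharging serves it
            cost += max(0.0, net)                         # diesel fills any remaining gap
        return cost

    soc = 2.0
    for k in range(len(demand) - horizon):
        best = min(itertools.product(actions, repeat=horizon),
                   key=lambda seq: plan_cost(soc, k, seq))
        a_eff = min(cap, max(0.0, soc + best[0])) - soc   # apply only the first action...
        soc += a_eff                                      # ...then re-plan at the next step
        print(f"t={k}: battery action {a_eff:+.1f} MW, soc={soc:.1f} MWh")

    Re-planning at every step is what lets the closed loop absorb forecast errors; the open-loop alternative would commit to the entire sequence computed at t=0.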

  16. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability in the system.

  17. Skill complementarity enhances heterophily in collaboration networks

    PubMed Central

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687

  18. Competition-Colonization Trade-Offs, Competitive Uncertainty, and the Evolutionary Assembly of Species

    PubMed Central

    Pillai, Pradeep; Guichard, Frédéric

    2012-01-01

    We utilize a standard competition-colonization metapopulation model in order to study the evolutionary assembly of species. Based on earlier work showing how models assuming strict competitive hierarchies will likely lead to runaway evolution and self-extinction for all species, we adopt a continuous competition function that allows for levels of uncertainty in the outcome of competition. By extending the standard patch-dynamic metapopulation model to include evolutionary dynamics, we then allow for the coevolution of species into stable communities composed of species with distinct limiting similarities. Runaway evolution towards stochastic extinction then becomes a limiting case controlled by the level of competitive uncertainty. We demonstrate how intermediate competitive uncertainty maximizes the equilibrium species richness as well as the adaptive radiation and self-assembly of species under adaptive dynamics with mutations of non-negligible size. By reconciling competition-colonization trade-off theory with co-evolutionary dynamics, our results reveal the importance of intermediate levels of competitive uncertainty for the evolutionary assembly of species. PMID:22448253

  19. Early use of Space Station Freedom for NASA's Microgravity Science and Applications Program

    NASA Technical Reports Server (NTRS)

    Rhome, Robert C.; O'Malley, Terence F.

    1992-01-01

    The paper describes microgravity science opportunities inherent to the restructured Space Station and presents a synopsis of the scientific utilization plan for the first two years of ground-tended operations. In the ground-tended utilization mode the Space Station is a large free-flyer providing a continuous microgravity environment unmatched by any other platform within any existing U.S. program. This period of early Space Station mixed-mode utilization, alternating between crew-tended and ground-tended approaches, is of such importance that Station-based microgravity science experiments may become benchmarks for the disciplines involved. The traffic model currently being pursued is designed to maximize this opportunity for the U.S. microgravity science community.

  20. Insurance choice and tax-preferred health savings accounts.

    PubMed

    Cardon, James H; Showalter, Mark H

    2007-03-01

    We develop an infinite horizon utility maximization model of the interaction between insurance choice and tax-preferred health savings accounts. The model can be used to examine a wide range of policy options, including flexible spending accounts, health savings accounts, and health reimbursement accounts. We also develop a 2-period version of the model to simulate several of its implications. Key results from the simulation analysis include the following: (1) with no adverse selection, use of unrestricted health savings accounts leads to modest welfare gains, after accounting for the tax revenue loss; (2) with adverse selection and an initial pooling equilibrium composed of "sick" and "healthy" consumers, introducing HSAs can, but does not necessarily, lead to a new pooling equilibrium. The new equilibrium results in a higher coinsurance rate, an increase in expected utility for healthy consumers, and a decrease in expected utility for sick consumers; (3) with adverse selection and a separating equilibrium, both sick and healthy consumers are better off with a health savings account; (4) efficiency gains are possible when insurance contracts are explicitly linked to tax-preferred health savings accounts.
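
    A 2-period consumer problem of this kind is straightforward to simulate. The sketch below is a minimal hypothetical setup (log utility, an actuarially fair premium, a single health shock, and assumed tax and interest rates, none taken from the paper) in which the consumer jointly picks a coinsurance rate and a tax-deductible HSA deposit to maximize expected utility.

    import numpy as np

    def expected_utility(coins, hsa, p_sick=0.3, loss=40.0, income=100.0, tax=0.25, r=0.02):
        # Period 1: income pays an actuarially fair premium plus an HSA deposit that
        # escapes income tax. Period 2: a health shock hits with probability p_sick
        # and the consumer pays the coinsurance share out of pocket.
        premium = p_sick * (1 - coins) * loss
        c1 = income - premium - (1 - tax) * hsa
        u = np.log(c1)
        for sick, prob in ((True, p_sick), (False, 1 - p_sick)):
            oop = coins * loss if sick else 0.0
            c2 = income + hsa * (1 + r) - oop
            u += prob * np.log(c2)
        return u

    grid = [(c, h) for c in np.linspace(0, 1, 11) for h in np.linspace(0, 20, 21)]
    coins, hsa = max(grid, key=lambda g: expected_utility(*g))
    print(f"optimal coinsurance {coins:.1f}, HSA deposit {hsa:.1f}")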

  1. Self-Learning Intelligent Agents for Dynamic Traffic Routing on Transportation Networks

    NASA Astrophysics Data System (ADS)

    Sadek, Add; Basha, Nagi

    Intelligent Transportation Systems (ITS) are designed to take advantage of recent advances in communications, electronics, and Information Technology to improve the efficiency and safety of transportation systems. Among the several ITS applications is the notion of Dynamic Traffic Routing (DTR), which involves generating "optimal" routing recommendations to drivers with the aim of maximizing network utilization. In this paper, we demonstrate the feasibility of using a self-learning intelligent agent to solve the DTR problem and achieve traffic user equilibrium in a transportation network. The core idea is to deploy an agent to a simulation model of a highway. The agent then learns by interacting with the simulation model. Once the agent reaches a satisfactory level of performance, it can be deployed to the real world, where it would continue to refine its control policies over time. To test this concept, the Cell Transmission Model (CTM) developed by Carlos Daganzo of the University of California at Berkeley is used to simulate a simple highway with two main alternative routes. With the model developed, a Reinforcement Learning Agent (RLA) is developed to learn how best to dynamically route traffic so as to maximize the utilization of existing capacity. Preliminary results obtained from our experiments are promising. RL, being an adaptive online learning technique, appears to have great potential for controlling stochastic dynamic systems such as transportation systems. Furthermore, the approach is highly scalable and applicable to a variety of networks and roadways.
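
    The agent's core learning loop can be sketched with a single-state Q-learning update on a stylized two-route network. The congestion function, the decay of earlier traffic, and all parameters below are assumptions for illustration; the paper's agent learns against the Cell Transmission Model rather than this toy cost.

    import random

    def travel_time(route, flow):
        # Travel time grows with the flow on the chosen route (a BPR-style congestion term).
        free_flow = [10.0, 12.0]
        return free_flow[route] * (1.0 + 0.15 * flow[route])

    q = [0.0, 0.0]            # one Q-value per route (single-state problem)
    flow = [0.0, 0.0]
    alpha, eps = 0.1, 0.2     # learning rate and exploration probability

    for episode in range(2000):
        route = random.randrange(2) if random.random() < eps \
            else max(range(2), key=lambda r: q[r])
        flow = [0.9 * f for f in flow]       # earlier traffic gradually clears
        flow[route] += 1.0                   # this routing decision adds flow
        reward = -travel_time(route, flow)   # faster trips earn higher reward
        q[route] += alpha * (reward - q[route])

    print("learned route values:", q)        # the agent spreads traffic as congestion builds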

  2. Changes of glucose utilization by erythrocytes, lactic acid concentration in the serum and blood cells, and haematocrit value during one hour rest after maximal effort in individuals differing in physical efficiency.

    PubMed

    Tomasik, M

    1982-01-01

    Glucose utilization by the erythrocytes, lactic acid concentration in the blood and erythrocytes, and haematocrit value were determined before exercise and during one hour of rest following maximal exercise in 97 individuals of either sex differing in physical efficiency. In the investigations reported by the author, individuals with strikingly high physical fitness performed maximal work one-third greater than that performed by individuals with medium fitness. The serum concentration of lactic acid remained above the resting value in all individuals even after 60 minutes of rest. In the erythrocytes, on the other hand, this concentration returned to the normal level, but only in individuals with strikingly high efficiency. Glucose utilization by the erythrocytes during the restitution period was highest immediately after the exercise in all studied individuals and showed a tendency for a more rapid return to resting values, again in individuals with the highest efficiency. The investigation of very efficient individuals, repeated twice, demonstrated greater utilization of glucose by the erythrocytes at the time of greater maximal exercise. This was associated with greater lactic acid concentration in the serum and erythrocytes throughout the whole one-hour rest period. The observed facts suggest an active participation of erythrocytes in the process of adaptation of the organism to exercise.

  3. The value of foresight: how prospection affects decision-making.

    PubMed

    Pezzulo, Giovanni; Rigoli, Francesco

    2011-01-01

    Traditional theories of decision-making assume that utilities are based on the intrinsic value of outcomes; in turn, these values depend on associations between expected outcomes and the current motivational state of the decision-maker. This view disregards the fact that humans (and possibly other animals) have prospection abilities, which permit anticipating future mental processes and motivational and emotional states. For instance, we can evaluate future outcomes in light of the motivational state we expect to have when the outcome is collected, not (only) when we make a decision. Consequently, we can plan for the future and choose to store food to be consumed when we expect to be hungry, not immediately. Furthermore, similarly to any expected outcome, we can assign a value to our anticipated mental processes and emotions. It has been reported that (in some circumstances) human subjects prefer to receive an unavoidable punishment immediately, probably because they are anticipating the dread associated with the time spent waiting for the punishment. This article offers a formal framework to guide neuroeconomic research on how prospection affects decision-making. The model has two characteristics. First, it uses model-based Bayesian inference to describe anticipation of cognitive and motivational processes. Second, the utility-maximization process considers these anticipations in two ways: to evaluate outcomes (e.g., the pleasure of eating a pie is evaluated differently at the beginning of a dinner, when one is hungry, and at the end of the dinner, when one is satiated), and as outcomes having a value themselves (e.g., the case of dread as a cost of waiting for punishment). By explicitly accounting for the relationship between prospection and value, our model provides a framework to reconcile the utility-maximization approach with psychological phenomena such as planning for the future and dread.
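
    The dread-as-outcome idea can be made concrete with a small calculation. All numbers below are assumed for illustration (a fixed punishment utility, a per-period dread cost, and conventional exponential discounting); the point is that counting anticipation as an outcome can make the immediate punishment preferable even though discounting alone would favour delay.

    pain = -10.0            # utility of the punishment itself
    dread_per_step = -0.8   # utility cost of each period spent anticipating it
    discount = 0.95         # conventional exponential discounting

    def value(delay):
        # Total utility of receiving the punishment after `delay` periods:
        # accumulated (discounted) dread plus the discounted punishment itself.
        dread = sum(dread_per_step * discount ** t for t in range(delay))
        return dread + discount ** delay * pain

    for d in (0, 5, 10):
        print(f"delay={d:2d}: total utility {value(d):6.2f}")
    # With enough dread, value(0) > value(5): "getting it over with" maximizes utility.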

  4. The Value of Foresight: How Prospection Affects Decision-Making

    PubMed Central

    Pezzulo, Giovanni; Rigoli, Francesco

    2011-01-01

    Traditional theories of decision-making assume that utilities are based on the intrinsic value of outcomes; in turn, these values depend on associations between expected outcomes and the current motivational state of the decision-maker. This view disregards the fact that humans (and possibly other animals) have prospection abilities, which permit anticipating future mental processes and motivational and emotional states. For instance, we can evaluate future outcomes in light of the motivational state we expect to have when the outcome is collected, not (only) when we make a decision. Consequently, we can plan for the future and choose to store food to be consumed when we expect to be hungry, not immediately. Furthermore, similarly to any expected outcome, we can assign a value to our anticipated mental processes and emotions. It has been reported that (in some circumstances) human subjects prefer to receive an unavoidable punishment immediately, probably because they are anticipating the dread associated with the time spent waiting for the punishment. This article offers a formal framework to guide neuroeconomic research on how prospection affects decision-making. The model has two characteristics. First, it uses model-based Bayesian inference to describe anticipation of cognitive and motivational processes. Second, the utility-maximization process considers these anticipations in two ways: to evaluate outcomes (e.g., the pleasure of eating a pie is evaluated differently at the beginning of a dinner, when one is hungry, and at the end of the dinner, when one is satiated), and as outcomes having a value themselves (e.g., the case of dread as a cost of waiting for punishment). By explicitly accounting for the relationship between prospection and value, our model provides a framework to reconcile the utility-maximization approach with psychological phenomena such as planning for the future and dread. PMID:21747755

  5. Reservoir Analysis Model for Battlefield Operations

    DTIC Science & Technology

    1989-05-01

    courtesy of the Imperial War Museum; Figure 2 is used courtesy of Frederick A. Praeger, Inc.; Figures 7, 8, and 9 are used courtesy of the Society of...operational and tactical levels of war. Military commanders today are confronted with problems of unprecedented complexity that require the application of...associated with operating reservoir systems in theaters of war. Without these tools the planner stands little chance of maximizing the utilization of his water

  6. Optimal planning for the sustainable utilization of municipal solid waste.

    PubMed

    Santibañez-Aguilar, José Ezequiel; Ponce-Ortega, José María; Betzabe González-Campos, J; Serna-González, Medardo; El-Halwagi, Mahmoud M

    2013-12-01

    The increasing generation of municipal solid waste (MSW) is a major problem, particularly for large urban areas with insufficient landfill capacities and inefficient waste management systems. Several options associated with the supply chain for implementing an MSW management system are available; however, to determine the optimal solution, several technical, economic, environmental and social aspects must be considered. Therefore, this paper proposes a mathematical programming model for the optimal planning of the supply chain associated with the MSW management system to maximize the economic benefit while accounting for technical and environmental issues. The optimization model simultaneously selects the processing technologies and their location, the distribution of wastes from cities as well as the distribution of products to markets. The problem was formulated as a multi-objective mixed-integer linear programming problem to maximize the profit of the supply chain and the amount of recycled wastes, with the results shown through Pareto curves that trade off economic and environmental aspects. The proposed approach is applied to a case study for the west-central part of Mexico to consider the integration of MSW from several cities to yield useful products. The results show that an integrated utilization of MSW can provide economic, environmental and social benefits. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. The Dynamics of Crime and Punishment

    NASA Astrophysics Data System (ADS)

    Hausken, Kjell; Moxnes, John F.

    This article analyzes crime development, one of the largest threats in today's world, the response to which is frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time in imprisonment. Each criminal maximizes expected utility, defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation for the criminal's utility-maximizing response to the given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. To our knowledge, this dynamic perspective has not been presented earlier. The empirical data on crime intensity and imprisonment for Norway, England and Wales, and the US support the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers, who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
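
    The stability claim can be illustrated by integrating a toy version of such a differential equation. The functional form below (exponential growth of criminal activity countered by a superlinear imprisonment response with exponent 1.2) is an assumption for illustration, not the paper's equation.

    def simulate(k, years=50.0, dt=0.1, c0=100.0, growth=0.05):
        # dC/dt = growth*C - k*C**1.2: crime grows unless the imprisonment term
        # rises faster than linearly with criminal activity.
        c = c0
        for _ in range(int(years / dt)):
            c += dt * (growth * c - k * c ** 1.2)
        return c

    for k in (0.0, 0.005, 0.02):
        print(f"imprisonment response k={k}: criminal activity after 50y = {simulate(k):.0f}")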

  8. Is there a trade-off between longevity and quality of life in Grossman's pure investment model?

    PubMed

    Eisenring, C

    2000-12-01

    The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI) model. It is shown that the answer critically hinges on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed-time problems. The one offering maximal intertemporal utility determines optimal longevity. Comparative static results of optimal longevity for a simplified version of the PI model are derived. The obtained results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rates. From a technical point of view, the paper illustrates that a discrete-time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.

  9. Introducing health gains in location-allocation models: A stochastic model for planning the delivery of long-term care

    NASA Astrophysics Data System (ADS)

    Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.

    2015-05-01

    Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
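
    The augmented ε-constraint method used above reduces, in outline, to optimizing one objective while bounding the other and sweeping the bound to trace the Pareto frontier. The candidate configurations and all figures below are hypothetical placeholders, not the Lisbon data.

    import numpy as np

    # Candidate network configurations: (expected health gains, expected cost in M euro).
    options = [(120, 1.0), (180, 1.8), (210, 2.9), (225, 4.2)]

    def best_under_budget(eps):
        # Maximize expected health gains subject to the cost constraint cost <= eps.
        feasible = [o for o in options if o[1] <= eps]
        return max(feasible, key=lambda o: o[0]) if feasible else None

    for eps in np.arange(1.0, 4.51, 0.5):   # sweep the cost bound to trace the frontier
        print(f"budget <= {eps:.1f}: best (gain, cost) = {best_under_budget(eps)}")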

  10. Equilibrium in a Production Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiarolla, Maria B., E-mail: maria.chiarolla@uniroma1.it; Haussmann, Ulrich G., E-mail: uhaus@math.ubc.ca

    2011-06-15

    Consider a closed production-consumption economy with multiple agents and multiple resources. The resources are used to produce the consumption good. The agents derive utility from holding resources as well as consuming the good produced. They aim to maximize their utility while the manager of the production facility aims to maximize profits. With the aid of a representative agent (who has a multivariable utility function) it is shown that an Arrow-Debreu equilibrium exists. In so doing we establish technical results that will be used to solve the stochastic dynamic problem (a case with infinite dimensional commodity space so the General Equilibrium Theory does not apply) elsewhere.

  11. Stochastic damage evolution in textile laminates

    NASA Technical Reports Server (NTRS)

    Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.

    1993-01-01

    A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on dividing each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximal strain failure criterion. Three modes of failure, i.e., fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter in fiber orientation on cell properties is discussed. The influence of weave on damage accumulation is illustrated with an example of a Kevlar/epoxy laminate.

  12. The Application of a Three-Tier Model of Intervention to Parent Training

    PubMed Central

    Phaneuf, Leah; McIntyre, Laura Lee

    2015-01-01

    A three-tier intervention system was designed for use with parents with preschool children with developmental disabilities to modify parent–child interactions. A single-subject changing-conditions design was used to examine the utility of a three-tier intervention system in reducing negative parenting strategies, increasing positive parenting strategies, and reducing child behavior problems in parent–child dyads (n = 8). The three intervention tiers consisted of (a) self-administered reading material, (b) group training, and (c) individualized video feedback sessions. Parental behavior was observed to determine continuation or termination of intervention. Results support the utility of a tiered model of intervention to maximize treatment outcomes and increase efficiency by minimizing the need for more costly time-intensive interventions for participants who may not require them. PMID:26213459

  13. Rethinking school-based health centers as complex adaptive systems: maximizing opportunities for the prevention of teen pregnancy and sexually transmitted infections.

    PubMed

    Daley, Alison Moriarty

    2012-01-01

    This article examines school-based health centers (SBHCs) as complex adaptive systems, the current gaps that exist in contraceptive access, and the potential to maximize this community resource in teen pregnancy and sexually transmitted infection (STI) prevention efforts. Adolescent pregnancy is a major public health challenge for the United States. Existing community resources need to be considered for their potential to impact teen pregnancy and STI prevention efforts. SBHCs are one such community resource to be leveraged in these efforts. They offer adolescent-friendly primary care services and are responsive to the diverse needs of the adolescents utilizing them. However, current restrictions on contraceptive availability limit the ability of SBHCs to maximize opportunities for comprehensive reproductive care and create missed opportunities for pregnancy and STI prevention. A clinical case explores the current models of health care services related to contraceptive care provided in SBHCs and the ability to meet or miss the needs of an adolescent seeking reproductive care in a SBHC.

  14. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    NASA Astrophysics Data System (ADS)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss-averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
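
    The structure of the investor's problem (maximize expected wealth subject to a bound on the probability of an unacceptable loss) can be sketched with a Monte Carlo tail estimate standing in for the large-deviations machinery. The return distribution, loss threshold and confidence level below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    returns = rng.normal(0.06, 0.15, 200_000)   # one risky asset, i.i.d. normal (assumption)

    def acceptable(w, loss=-0.10, alpha=0.01):
        # Chance constraint: the probability of a portfolio loss worse than 10%
        # must stay below alpha (estimated here by simple Monte Carlo).
        return np.mean(w * returns < loss) <= alpha

    weights = np.linspace(0.0, 1.0, 101)        # remainder held in cash at 0%
    feasible = [w for w in weights if acceptable(w)]
    best = max(feasible, key=lambda w: w * returns.mean())
    print(f"largest acceptable risky weight: {best:.2f}")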

  15. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    PubMed Central

    2010-01-01

    Background: Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion: We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than by the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary: In this era of increasing adoption of evidence-based healthcare models, the rational-choice, utility-maximizing assumptions in EBDM and EBPM must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved. PMID:20504357

  16. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    PubMed

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

    Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than by the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational-choice, utility-maximizing assumptions in EBDM and EBPM must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved.

  17. A multistage stochastic programming model for a multi-period strategic expansion of biofuel supply chain under evolving uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Fei; Huang, Yongxi

    Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.

  18. A multistage stochastic programming model for a multi-period strategic expansion of biofuel supply chain under evolving uncertainties

    DOE PAGES

    Xie, Fei; Huang, Yongxi

    2018-02-04

    Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.

  19. Model based adaptive control of a continuous capture process for monoclonal antibodies production.

    PubMed

    Steinebach, Fabian; Angarita, Monica; Karst, Daniel J; Müller-Späth, Thomas; Morbidelli, Massimo

    2016-04-29

    A two-column capture process for continuous processing of cell-culture supernatant is presented. Similar to other multicolumn processes, this process uses sequential countercurrent loading of the target compound in order to maximize resin utilization and productivity for a given product yield. The process was designed using a novel mechanistic model for affinity capture, which takes into account both specific adsorption and transport through the resin beads. Simulations as well as experimental results for the capture of an IgG antibody are discussed. The model was able to predict the process performance in terms of yield, productivity and capacity utilization. Compared to continuous capture with two columns operated batch-wise in parallel, a 2.5-fold higher capacity utilization was obtained for the same productivity and yield. This results in an equal improvement in product concentration and reduction of buffer consumption. The developed model was used not only for the process design and optimization but also for its online control. In particular, the unit operating conditions are changed in order to maintain high product yield while optimizing the process performance in terms of capacity utilization and buffer consumption, even in the presence of changing upstream conditions and resin aging. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
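
    Because the semi-nonparametric model nests the MNL (or MDCEV) model, the proposed test amounts to a conventional likelihood ratio test once both models are estimated. The log-likelihood values and the count of extra distributional parameters below are hypothetical.

    from scipy.stats import chi2

    def lr_test(loglik_restricted, loglik_flexible, extra_params):
        # Reject the standard Gumbel assumption when the flexible error
        # distribution fits significantly better than the nested model.
        stat = 2.0 * (loglik_flexible - loglik_restricted)
        return stat, chi2.sf(stat, df=extra_params)

    stat, p = lr_test(loglik_restricted=-4521.7, loglik_flexible=-4498.3, extra_params=4)
    print(f"LR statistic = {stat:.1f}, p-value = {p:.2g}")  # small p -> Gumbel rejected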

  1. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  2. A Predictive Analysis of the Department of Defense Distribution System Utilizing Random Forests

    DTIC Science & Technology

    2016-06-01

    resources capable of meeting both customer and individual resource constraints and goals while also maximizing the global benefit to the supply...and probability rules to determine the optimal red wine distribution network for an Italian-based wine producer. The decision support model for...combinations of factors that will result in delivery of the highest quality wines. The model's first stage inputs basic logistics information to look

  3. Quantifying the costs and benefits of privacy-preserving health data publishing.

    PubMed

    Khokhar, Rashid Hussain; Chen, Rui; Fung, Benjamin C M; Lui, Siu Man

    2014-08-01

    Cost-benefit analysis is a prerequisite for making good business decisions. In the business environment, companies intend to make profit from maximizing information utility of published data while having an obligation to protect individual privacy. In this paper, we quantify the trade-off between privacy and data utility in health data publishing in terms of monetary value. We propose an analytical cost model that can help health information custodians (HICs) make better decisions about sharing person-specific health data with other parties. We examine relevant cost factors associated with the value of anonymized data and the possible damage cost due to potential privacy breaches. Our model guides an HIC to find the optimal value of publishing health data and could be utilized for both perturbative and non-perturbative anonymization techniques. We show that our approach can identify the optimal value for different privacy models, including K-anonymity, LKC-privacy, and ε-differential privacy, under various anonymization algorithms and privacy parameters through extensive experiments on real-life data. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Effective return, risk aversion and drawdowns

    NASA Astrophysics Data System (ADS)

    Dacorogna, Michel M.; Gençay, Ramazan; Müller, Ulrich A.; Pictet, Olivier V.

    2001-01-01

    We derive two risk-adjusted performance measures for investors with risk-averse preferences. Maximizing these measures is equivalent to maximizing the expected utility of an investor. The first measure, Xeff, is derived assuming a constant risk aversion while the second measure, Reff, is based on a stronger risk aversion to clustering of losses than of gains. The clustering of returns is captured through a multi-horizon framework. The empirical properties of Xeff and Reff are studied within the context of real-time trading models for foreign exchange rates, and their properties are compared to those of more traditional measures like the annualized return, the Sharpe Ratio and the maximum drawdown. Our measures are shown to be more robust against clustering of losses and to fully characterize the dynamic behaviour of investment strategies.
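
    One standard constant-risk-aversion construction behind such a measure is the certainty-equivalent return under exponential utility, which can be compared against the mean return and maximum drawdown on simulated strategies. The construction and all parameters below are illustrative assumptions; the paper's annualization details and the loss-clustering Reff variant are more involved.

    import numpy as np

    rng = np.random.default_rng(2)

    def x_eff(returns, gamma=4.0):
        # Certainty-equivalent return under exponential utility: maximizing this
        # is equivalent to maximizing expected utility with risk aversion gamma.
        return -np.log(np.mean(np.exp(-gamma * returns))) / gamma

    def max_drawdown(returns):
        wealth = np.cumprod(1.0 + returns)
        peak = np.maximum.accumulate(wealth)
        return np.max(1.0 - wealth / peak)

    smooth = rng.normal(0.002, 0.01, 1000)                     # steady strategy
    lumpy = np.where(rng.random(1000) < 0.05, -0.05, 0.0045)   # rare large losses
    for name, r in (("smooth", smooth), ("lumpy", lumpy)):
        print(f"{name}: mean {r.mean():.4f}, Xeff {x_eff(r):.4f}, maxDD {max_drawdown(r):.2f}")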

  5. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
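
    A minimal sketch of a utility-earning heuristic with task dropping appears below. The linear-decay utility shape, the greedy utility-per-unit-time rule, and the drop threshold are all assumptions for illustration, not the paper's heuristics.

    def utility(task, finish_time):
        # Utility earned if the task completes at finish_time: full value up to
        # a soft deadline, then decaying linearly toward zero.
        late = max(0.0, finish_time - task["deadline"])
        return task["value"] * max(0.0, 1.0 - task["decay"] * late)

    def schedule(tasks, drop_threshold=0.2):
        # Greedy rule for oversubscription: run the task with the highest utility
        # per unit of machine time; drop tasks whose earnable utility has fallen
        # below a fraction of their original value.
        t, earned = 0.0, 0.0
        pending = list(tasks)
        while pending:
            pending.sort(key=lambda tk: utility(tk, t + tk["dur"]) / tk["dur"], reverse=True)
            task = pending.pop(0)
            if utility(task, t + task["dur"]) < drop_threshold * task["value"]:
                continue   # drop: the machine time is better spent elsewhere
            t += task["dur"]
            earned += utility(task, t)
        return earned

    tasks = [{"value": 10, "dur": 4, "deadline": 5, "decay": 0.2},
             {"value": 6, "dur": 1, "deadline": 2, "decay": 0.5},
             {"value": 8, "dur": 3, "deadline": 4, "decay": 0.1}]
    print("total utility earned:", schedule(tasks))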

  6. Designing Agent Collectives For Systems With Markovian Dynamics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lawson, John W.

    2004-01-01

    The Collective Intelligence (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided world utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation also has benefits in scenarios in which the dynamics of an agent's utility function are observable but not Markovian. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low opacity (analogous to having high signal to noise) but are not factored (analogous to not being incentive compatible), reliably improves performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.

  7. Asset Management for Water and Wastewater Utilities

    EPA Pesticide Factsheets

    Renewing and replacing the nation's public water infrastructure is an ongoing task. Asset management can help a utility maximize the value of its capital as well as its operations and maintenance dollars.

  8. Golden Ratio Genetic Algorithm Based Approach for Modelling and Analysis of the Capacity Expansion of Urban Road Traffic Network

    PubMed Central

    Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun

    2015-01-01

    This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). The bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized subject to the optimal route choices of travelers. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking the traditional one-way network and a bidirectional network as the study cases, three numerical calculations are conducted to validate the presented model and algorithm, and the primary factors influencing the extended capacity model are analyzed. The calculation results indicate that capacity expansion is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand in the evaluation of road network capacity. PMID:25802512
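
    As a rough illustration of a golden-ratio-flavoured genetic operator, the sketch below blends parent solutions at golden-ratio proportions inside an otherwise ordinary elitist genetic algorithm on a toy one-dimensional objective. This interpretation is assumed purely for illustration; the HGAGR's actual encoding, operators and bilevel objective are specified in the paper.

    import random

    PHI = (5 ** 0.5 - 1) / 2      # golden ratio conjugate, ~0.618

    def fitness(x):
        return -(x - 3.7) ** 2    # toy objective; the paper's is a network utility

    def golden_crossover(a, b):
        # Assumed operator: combine parents at golden-ratio proportions instead of 50/50.
        return PHI * a + (1 - PHI) * b

    pop = [random.uniform(0, 10) for _ in range(30)]
    for gen in range(100):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        children = [golden_crossover(random.choice(elite), random.choice(elite))
                    + random.gauss(0, 0.1)          # mutation
                    for _ in range(20)]
        pop = elite + children
    print("best solution found:", max(pop, key=fitness))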

  9. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
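
    The dynamic program over a discretized slope set is essentially a Viterbi pass through a hidden Markov chain of slope states. The sketch below assumes a simple persistence/jump transition structure, Gaussian increment likelihoods and made-up parameters; the paper's model and its alternating-maximization parameter estimation are richer.

    import numpy as np

    def map_slopes(y, slopes, sigma=0.5, p_stay=0.95):
        # MAP slope sequence: the hidden slope persists with prob p_stay or jumps
        # (a breakpoint); each observed increment is the slope plus Gaussian noise.
        dy = np.diff(y)
        n_s = len(slopes)
        log_trans = np.full((n_s, n_s), np.log((1 - p_stay) / (n_s - 1)))
        np.fill_diagonal(log_trans, np.log(p_stay))
        ll = -0.5 * ((dy[:, None] - np.asarray(slopes)[None, :]) / sigma) ** 2
        score, back = ll[0].copy(), np.zeros((len(dy), n_s), dtype=int)
        for t in range(1, len(dy)):                  # forward pass
            cand = score[:, None] + log_trans
            back[t] = np.argmax(cand, axis=0)
            score = cand[back[t], np.arange(n_s)] + ll[t]
        path = [int(np.argmax(score))]
        for t in range(len(dy) - 1, 0, -1):          # backtrace
            path.append(back[t][path[-1]])
        return [slopes[s] for s in reversed(path)]

    y = np.concatenate([np.arange(0, 10, 1.0), 10 + 3 * np.arange(1, 11, 1.0)])
    y += np.random.default_rng(3).normal(0, 0.4, len(y))
    print(map_slopes(y, slopes=[0.0, 1.0, 2.0, 3.0]))   # recovers slope 1, then slope 3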

  10. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  11. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  12. Estimation of the standardized ileal digestible valine to lysine ratio required for 25- to 120-kilogram pigs fed low crude protein diets supplemented with crystalline amino acids.

    PubMed

    Liu, X T; Ma, W F; Zeng, X F; Xie, C Y; Thacker, P A; Htoo, J K; Qiao, S Y

    2015-10-01

    Four 28-d experiments were conducted to determine the standardized ileal digestible (SID) valine (Val) to lysine (Lys) ratio required for 26- to 46- (Exp. 1), 49- to 70- (Exp. 2), 71- to 92- (Exp. 3), and 94- to 119-kg (Exp. 4) pigs fed low CP diets supplemented with crystalline AA. The first 3 experiments utilized 150 pigs (Duroc × Landrace × Large White), while Exp. 4 utilized 90 finishing pigs. Pigs in all 4 experiments were randomly allocated to 1 of 5 diets with 6 pens per treatment (3 pens of barrows and 3 pens of gilts) and 5 pigs per pen for the first 3 experiments and 3 pigs per pen for Exp. 4. Diets for all experiments were formulated to contain SID Val to Lys ratios of 0.55, 0.60, 0.65, 0.70, or 0.75. In Exp. 1 (26 to 46 kg), ADG increased (linear, P = 0.039; quadratic, P = 0.042) with an increasing dietary Val:Lys ratio. The SID Val:Lys ratio to maximize ADG was 0.62 using a linear broken-line model and 0.71 using a quadratic model. In Exp. 2 (49 to 70 kg), ADG increased (linear, P = 0.021; quadratic, P = 0.042) as the SID Val:Lys ratio increased. G:F improved (linear, P = 0.039) and serum urea nitrogen (SUN) decreased (linear, P = 0.021; quadratic, P = 0.024) with an increased SID Val:Lys ratio. The SID Val:Lys ratios to maximize ADG as well as to minimize SUN levels were 0.67 and 0.65, respectively, using a linear broken-line model and 0.72 and 0.71, respectively, using a quadratic model. In Exp. 3 (71 to 92 kg), ADG increased (linear, P = 0.007; quadratic, P = 0.022) and SUN decreased (linear, P = 0.011; quadratic, P = 0.034) as the dietary SID Val:Lys ratio increased. The SID Val:Lys ratios to maximize ADG as well as to minimize SUN levels were 0.67 and 0.67, respectively, using a linear broken-line model and 0.72 and 0.74, respectively, using a quadratic model. In Exp. 4 (94 to 119 kg), ADG increased (linear, P = 0.041) and G:F was improved (linear, P = 0.004; quadratic, P = 0.005) as the dietary SID Val:Lys ratio increased. The SID Val:Lys ratio to maximize G:F was 0.68 using a linear broken-line model and 0.72 using a quadratic model. Carcass traits and muscle quality were not influenced by the SID Val:Lys ratio. In conclusion, the dietary SID Val:Lys ratios required for 26- to 46-, 49- to 70-, 71- to 92-, and 94- to 119-kg pigs were estimated to be 0.62, 0.66, 0.67, and 0.68, respectively, using a linear broken-line model and 0.71, 0.72, 0.73, and 0.72, respectively, using a quadratic model.
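
    For readers unfamiliar with the broken-line (linear-plateau) fitting used above, the sketch below estimates such a model by scanning candidate breakpoints and keeping the least-squares best; the dose-response numbers are invented for illustration, not taken from the study.

```python
import numpy as np

def broken_line_fit(x, y):
    """Linear-plateau fit: response rises linearly with x up to a breakpoint,
    then plateaus. Returns (breakpoint, plateau, slope)."""
    best = None
    for bp in np.linspace(x.min(), x.max(), 201)[1:-1]:  # candidate breakpoints
        z = np.minimum(x - bp, 0.0)            # (x - bp) below the break, else 0
        A = np.column_stack([np.ones_like(x), z])        # y ~ plateau + slope*z
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((y - A @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, bp, coef[0], coef[1])
    _, bp, plateau, slope = best
    return bp, plateau, slope

# hypothetical dose-response: ADG (g/d) versus SID Val:Lys ratio
ratio = np.array([0.55, 0.60, 0.65, 0.70, 0.75])
adg = np.array([810.0, 845.0, 870.0, 872.0, 871.0])      # invented numbers
print(broken_line_fit(ratio, adg))             # breakpoint ~ estimated requirement
```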

  13. Power Converters Maximize Outputs Of Solar Cell Strings

    NASA Technical Reports Server (NTRS)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters were devised to maximize the power transferred from solar photovoltaic strings to storage batteries and other electrical loads. The converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; power out of the converters is maximized; and the controller in the system is a microprocessor.

  14. Microeconomics-based resource allocation in overlay networks by using non-strategic behavior modeling

    NASA Astrophysics Data System (ADS)

    Analoui, Morteza; Rezvani, Mohammad Hossein

    2011-01-01

    Behavior modeling has recently been investigated for designing self-organizing mechanisms in the context of communication networks, in order to exploit the natural selfishness of the users with the goal of maximizing the overall utility. In strategic behavior modeling, the users of the network are assumed to be game players who seek to maximize their utility while taking into account the decisions that the other players might make. The essential difference between that research and this work is that we incorporate non-strategic decisions in order to design the mechanism for the overlay network. In this solution concept, the decisions that a peer might make do not affect the actions of the other peers at all. The consumer-firm theory developed in microeconomics is the model of non-strategic behavior that we have adopted in our research. Based on it, we present distributed algorithms for peers' "joining" and "leaving" operations. We model the overlay network as a competitive economy in which the content provided by an origin server can be viewed as a commodity, and the origin server and the peers who multicast the content to their downstream peers are considered the firms. On the other hand, due to the dual role of the peers in the overlay network, they can be considered consumers as well. On joining the overlay economy, each peer is provided with an income and tries to obtain the service regardless of the behavior of the other peers. We have designed the scalable algorithms in such a way that the existence of an equilibrium price (known as the Walrasian equilibrium price) is guaranteed.
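
    The Walrasian equilibrium invoked here is classically computed by tâtonnement: raise the price of over-demanded goods and lower it otherwise. A minimal exchange-economy sketch with Cobb-Douglas consumers (all preference shares and endowments are invented for illustration):

```python
import numpy as np

def excess_demand(p, alpha, endow):
    """Cobb-Douglas demand minus supply for each good.
    alpha[i, g]: consumer i's expenditure share on good g (rows sum to 1).
    endow[i, g]: consumer i's endowment of good g."""
    wealth = endow @ p                       # value of each consumer's endowment
    demand = (alpha * wealth[:, None]) / p   # x_ig = alpha_ig * wealth_i / p_g
    return demand.sum(axis=0) - endow.sum(axis=0)

def tatonnement(alpha, endow, step=0.05, iters=5000):
    p = np.ones(alpha.shape[1])
    for _ in range(iters):
        p = np.maximum(p + step * excess_demand(p, alpha, endow), 1e-6)
        p /= p.sum()                         # prices are only relative
    return p

alpha = np.array([[0.7, 0.3], [0.2, 0.8]])   # two consumers, two goods
endow = np.array([[1.0, 0.0], [0.0, 1.0]])
p = tatonnement(alpha, endow)
print("equilibrium prices:", p, "excess demand:", excess_demand(p, alpha, endow))
```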

  15. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.

    PubMed

    Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B

    2016-08-01

    We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy.

  16. Learning in engineered multi-agent systems

    NASA Astrophysics Data System (ADS)

    Menon, Anup

    Consider the problem of maximizing the total power produced by a wind farm. Due to aerodynamic interactions between wind turbines, each turbine maximizing its individual power---as is the case in present-day wind farms---does not lead to optimal farm-level power capture. Further, there are no good models to capture the said aerodynamic interactions, rendering model-based optimization techniques ineffective. Thus, model-free distributed algorithms are needed that help turbines adapt their power production on-line so as to maximize farm-level power capture. Motivated by such problems, the main focus of this dissertation is a distributed model-free optimization problem in the context of multi-agent systems. The set-up comprises a fixed number of agents, each of which can pick an action and observe the value of its individual utility function. An individual's utility function may depend on the collective action taken by all agents. The exact functional form (or model) of the agent utility functions, however, is unknown; an agent can only measure the numeric value of its utility. The objective of the multi-agent system is to optimize the welfare function (i.e. the sum of the individual utility functions). Such a collaborative task requires communication between agents, and we allow for the possibility of such inter-agent communication. We also pay attention to the role played by the pattern of such information exchange on certain aspects of performance. We develop two algorithms to solve this problem. The first one, the engineered Interactive Trial and Error Learning (eITEL) algorithm, is based on a line of work in the Learning in Games literature and applies when agent actions are drawn from finite sets. While in a model-free setting, we introduce a novel qualitative graph-theoretic framework to encode known directed interactions of the form "which agents' actions affect which others' payoffs" (interaction graph). We encode explicit inter-agent communications in a directed graph (communication graph) and, under certain conditions, prove convergence of the agent joint action (under eITEL) to the welfare-optimizing set. The main condition requires that the union of the interaction and communication graphs be strongly connected; thus the algorithm combines an implicit form of communication (via interactions through utility functions) with explicit inter-agent communications to achieve the given collaborative goal. This work has kinship with certain evolutionary computation techniques such as simulated annealing; the algorithm steps are carefully designed such that they describe an ergodic Markov chain with a stationary distribution that has support over states where agent joint actions optimize the welfare function. The main analysis tool is perturbed Markov chains, and results of broader interest regarding these are derived as well. The other algorithm, Collaborative Extremum Seeking (CES), uses techniques from extremum seeking control to solve the problem when agent actions are drawn from the set of real numbers. In this case, under the assumption of the existence of a local minimizer for the welfare function and a connected undirected communication graph between agents, a result regarding convergence of the joint action to a small neighborhood of a local optimizer of the welfare function is proved.
Since extremum seeking control uses a simultaneous gradient estimation-descent scheme, gradient information available in the continuous action space formulation is exploited by the CES algorithm to yield improved convergence speeds. The effectiveness of this algorithm for the wind farm power maximization problem is evaluated via simulations. Lastly, we turn to a different question regarding the role of the information exchange pattern on the performance of distributed control systems, by means of a case study of the vehicle platooning problem. In the vehicle platoon control problem, the objective is to design distributed control laws for individual vehicles in a platoon (or a road-train) that regulate inter-vehicle distances at a specified safe value while the entire platoon follows a leader vehicle. While most of the literature on the problem deals with some inadequacy in control performance when the information exchange is of the nearest-neighbor type, we consider an arbitrary graph serving as the information exchange pattern and derive a relationship between a certain indicator of control performance and the information pattern. Such analysis helps in understanding qualitative features of the `right' information pattern for this problem.
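
    The extremum seeking idea underlying CES can be shown in a single-agent, discrete-time sketch: inject a sinusoidal probe, demodulate the measured utility to estimate the gradient, and ascend it. Parameters and the utility function are illustrative, not the dissertation's tuning.

```python
import numpy as np

def extremum_seek(f, u0=0.0, a=0.1, omega=1.0, gain=0.5, dt=0.01, steps=150000):
    """Drive u toward a local maximizer of the unknown function f."""
    u_hat, t = u0, 0.0
    for _ in range(steps):
        probe = a * np.sin(omega * t)
        y = f(u_hat + probe)                # measured utility (model-free)
        grad_est = y * np.sin(omega * t)    # demodulation ~ (a/2) * f'(u_hat)
        u_hat += dt * gain * grad_est       # gradient ascent on the estimate
        t += dt
    return u_hat

# utility unknown to the agent, maximized at u = 2
print(extremum_seek(lambda u: -(u - 2.0) ** 2))   # approaches 2.0
```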

  17. Optimality of profit-including prices under ideal planning.

    PubMed

    Samuelson, P A

    1973-07-01

    Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the "values" model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries.

  18. Optimality of Profit-Including Prices Under Ideal Planning

    PubMed Central

    Samuelson, Paul A.

    1973-01-01

    Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the “values” model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries. PMID:16592102

  19. Next Day Price Forecasting in Deregulated Market by Combination of Artificial Neural Network and ARIMA Time Series Models

    NASA Astrophysics Data System (ADS)

    Areekul, Phatchakorn; Senjyu, Tomonobu; Urasaki, Naomitsu; Yona, Atsushi

    Electricity price forecasting is becoming increasingly relevant to power producers and consumers in the new competitive electric power markets, when planning bidding strategies in order to maximize their benefits and utilities, respectively. This paper proposes a method to predict hourly electricity prices for next-day electricity markets by combining ARIMA and ANN models. The proposed method is examined on the Australian National Electricity Market (NEM), New South Wales region, for the year 2006. A comparison of forecasting performance among the proposed ARIMA, ANN and combined (ARIMA-ANN) models is presented. Empirical results indicate that the ARIMA-ANN model can improve price forecasting accuracy.
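
    A common way to realize such an ARIMA-ANN hybrid (a sketch of the general recipe, not necessarily the authors' exact configuration): fit ARIMA to capture the linear structure, then train a small neural network on lagged ARIMA residuals to capture the nonlinear remainder. The synthetic series below stands in for real market prices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(600)
prices = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, 600)  # stand-in data

# 1) linear component via ARIMA
arima = ARIMA(prices, order=(2, 0, 1)).fit()
resid = arima.resid

# 2) nonlinear component: ANN trained on lagged residuals
lags = 24
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# 3) combined one-step-ahead forecast
linear_part = arima.forecast(steps=1)[0]
nonlinear_part = ann.predict(resid[-lags:].reshape(1, -1))[0]
print("hybrid forecast:", linear_part + nonlinear_part)
```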

  20. Recipe creation for automated defect classification with a 450mm surface scanning inspection system based on the bidirectional reflectance distribution function of native defects

    NASA Astrophysics Data System (ADS)

    Yathapu, Nithin; McGarvey, Steve; Brown, Justin; Zhivotovsky, Alexander

    2016-03-01

    This study explores the feasibility of Automated Defect Classification (ADC) with a Surface Scanning Inspection System (SSIS). The defect classification was based upon scattering-sensitivity sizing curves created via modeling of the Bidirectional Reflectance Distribution Function (BRDF). The BRDF allowed for the creation of SSIS sensitivity/sizing curves based upon the optical properties of both the filmed wafer samples and the optical architecture of the SSIS. The elimination of Polystyrene Latex Sphere (PSL) and Silica deposition on both filmed and bare Silicon wafers prior to SSIS recipe creation and ADC creates a challenge for defect binning based on light-scattering surface intensity. This study explored the theoretical maximal SSIS sensitivity based on native-defect recipe creation in conjunction with the maximal sensitivity derived from BRDF-modeling recipe creation. Single-film and film-stack wafers were inspected with recipes based upon BRDF modeling. Following SSIS recipe creation, initially targeting maximal sensitivity, selected recipes were optimized to classify defects commonly found on non-patterned wafers. The results were utilized to determine the ADC binning accuracy of the native defects and to evaluate the SSIS recipe creation methodology. A statistically valid sample of defects from the final inspection results of each SSIS recipe and filmed substrate was reviewed post SSIS ADC processing on a Defect Review Scanning Electron Microscope (SEM). Native defect images were collected from each statistically valid defect bin category/size for SEM review. The data collected from the Defect Review SEM were utilized to determine the statistical purity and accuracy of each SSIS defect classification bin. This paper explores both the commercial and technical considerations of eliminating PSL and Silica deposition as a precursor to SSIS recipe creation targeted towards ADC. Successful integration of SSIS ADC in conjunction with recipes created via BRDF modeling has the potential to dramatically reduce the workload requirements of a Defect Review SEM and save a significant amount of capital expenditure for 450mm SSIS recipe creation.

  1. Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.

    ERIC Educational Resources Information Center

    Ravizza, Kenneth; Rotella, Robert

    Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…

  2. Economics of Red Pine Management for Utility Pole Timber

    Treesearch

    Gerald H. Grossman; Karen Potter-Witter

    1991-01-01

    Including utility poles in red pine management regimes leads to distinctly different management recommendations. Where utility pole markets exist, managing for poles will maximize net returns. To do so, plantations should be maintained above 110 ft²/ac, higher than usually recommended. In Michigan's northern lower peninsula, approximately...

  3. Counseling issues and management of side effects for women using depot medroxyprogesterone acetate contraception.

    PubMed

    Nelson, A L

    1996-05-01

    Patient satisfaction is crucial to maximizing long-term utilization and efficacy of any contraceptive method. Satisfaction is enhanced when appropriate preutilization counseling is offered and when side effects are successfully managed. This article provides a conceptual model for patient counseling, highlights the significant points that should be included in counseling patients about depot medroxyprogesterone acetate (DMPA), and offers clinical suggestions to help evaluate and treat the more common side effects associated with DMPA use.

  4. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  5. Individual welfare maximization in electricity markets including consumer and full transmission system modeling

    NASA Astrophysics Data System (ADS)

    Weber, James Daniel

    1999-11-01

    This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is demonstrated to be useful in visualizing bus-based and transmission line-based quantities.

  6. Using Classification Trees to Predict Alumni Giving for Higher Education

    ERIC Educational Resources Information Center

    Weerts, David J.; Ronca, Justin M.

    2009-01-01

    As the relative level of public support for higher education declines, colleges and universities aim to maximize alumni-giving to keep their programs competitive. Anchored in a utility maximization framework, this study employs the classification and regression tree methodology to examine characteristics of alumni donors and non-donors at a…

  7. Designing Agent Collectives For Systems With Markovian Dynamics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lawson, John W.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The "Collective Intelligence" (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided "world" utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics, in particular scenarios where not all of the arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low "opacity (analogous to having high signal to noise) but are not "factored" (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.

  8. Exploring theoretical frameworks for the analysis of fertility fluctuations.

    PubMed

    Micheli, G A

    1988-05-01

    The Easterlin theory, popular during the 1970s, explained population fluctuations in terms of maximization of choice, based on the evaluation of previously acquired information. Fluctuations in procreational patterns were seen as responses to conflict between 2 consecutive generations, in which the propensity to procreate is inversely related to cohort size. However, the number of demographic trends not directly explainable by the hypothesis implies that either the model must be extended over a longer time frame or that there has been a drastic change of regime, i.e., a basic change in popular attitudes which determine decision-making behavior. 4 strategic principles underlie reproductive decisions: primary adaptation, economic utility, norm internalization, and identity reinforcement. The decision-making process is determined by the relative importance of these 4 principles. Primary adaptation implies inertia, i.e., nondecision. Economic utility implies the use of rational choice to maximize economic gain. Norm internalization implies conforming to the behavior of one's sociocultural peers as if it were one's own choice. Identity reinforcement implies that one decides to reproduce because procreation is a way of extending one's identity forward in time. The 2 active decision-making attitudes, economic rationality and identity reinforcement, are strategically both antagonistic and complementary. This polarity of behavior lends itself to analysis in terms of the predator-prey model, in which population is seen as the predator and resources as the prey. However, in applying the model, one must keep in mind that the real demographic picture is not static and that it is subject to deformation by external forces.
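
    The predator-prey analogy suggested here, with population as predator and resources as prey, is the classical Lotka-Volterra system; a generic numerical sketch with illustrative coefficients:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a, b, c, d):
    """Resources r (prey) and population n (predator):
    dr/dt = a*r - b*r*n,  dn/dt = -c*n + d*r*n."""
    r, n = z
    return [a * r - b * r * n, -c * n + d * r * n]

sol = solve_ivp(lotka_volterra, (0, 100), [10.0, 2.0],
                args=(1.0, 0.1, 1.5, 0.075), max_step=0.05)
resources, population = sol.y
print("population oscillates between",
      population.min(), "and", population.max())
```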

  9. Prioritization of engineering support requests and advanced technology projects using decision support and industrial engineering models

    NASA Technical Reports Server (NTRS)

    Tavana, Madjid

    1995-01-01

    The evaluation and prioritization of Engineering Support Requests (ESRs) is a particularly difficult task at the Kennedy Space Center (KSC) Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed in the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESRs. The purpose of this project is to build on the existing methodologies and develop a multiple-criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and the Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESRs.
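
    As background on the AHP step, priority weights are the principal eigenvector of a reciprocal pairwise-comparison matrix, checked by a consistency ratio; a generic sketch with an invented three-criterion comparison (not KSC's actual judgments):

```python
import numpy as np

def ahp_weights(M):
    """Priority weights from a reciprocal pairwise-comparison matrix M,
    plus Saaty's consistency ratio (values below 0.1 are acceptable)."""
    vals, vecs = np.linalg.eig(M)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# invented comparisons among Safety, Reliability, and O&M Cost Savings
M = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_weights(M)
print("weights:", weights, "consistency ratio:", cr)
```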

  10. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    PubMed Central

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-01

    In the study of power control and allocation based on pricing, the utility of secondary users is usually studied from the perspective of the signal-to-noise ratio. Studying secondary-user utility from the perspective of communication demand can not only help secondary users meet their maximum communication needs but also maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting the demand of network communication, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network's communication requirements and rents spectrum resources from several primary base stations, with a Bertrand game modeling the transaction behavior of the primary base stations and the secondary base station. In the second stage, the subcarrier and power allocation problem of the secondary base station is defined as a nonlinear programming problem and solved based on Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared to other spectrum sharing schemes. PMID:28067850

  11. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps to provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump uses transporter proteins to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region in which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  12. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an expectation-maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations: a so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times over PCA and parallel PCA, respectively.
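
    The EM formulation of PCA referenced above (in the style of Roweis's EM-PCA) avoids forming the covariance matrix and its eigendecomposition, which is where the claimed complexity reduction comes from; a minimal sketch:

```python
import numpy as np

def em_pca(Y, k, iters=100):
    """EM for PCA: Y is d x n and centered; returns an orthonormal d x k basis.
    Never forms the d x d covariance matrix or its eigendecomposition."""
    d, n = Y.shape
    C = np.random.default_rng(0).normal(size=(d, k))   # random initial basis
    for _ in range(iters):
        X = np.linalg.solve(C.T @ C, C.T @ Y)          # E-step: latent coordinates
        C = Y @ X.T @ np.linalg.inv(X @ X.T)           # M-step: update the basis
    Q, _ = np.linalg.qr(C)                             # orthonormalize the span
    return Q

rng = np.random.default_rng(1)
Y = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 500))   # rank-3 data
Y -= Y.mean(axis=1, keepdims=True)
basis = em_pca(Y, k=3)    # spans the same subspace as the top-3 eigenvectors
```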

  13. A decision theoretical approach for diffusion promotion

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Liu, Yun

    2009-09-01

    In order to maximize cost efficiency from scarce marketing resources, marketers face the problem of which group of consumers to target for promotions. We propose a decision-theoretical approach to model this strategic situation. In one promotion model that we develop, marketers balance the probability of successful persuasion against the expected profits on a diffusion scale before making their decisions. In the other promotion model, the cost of identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and discuss the best strategy. Our results show that efforts to target influentials or easily influenced people might be redundant under some conditions.

  14. Social and Professional Participation of Individuals Who Are Deaf: Utilizing the Psychosocial Potential Maximization Framework

    ERIC Educational Resources Information Center

    Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise

    2012-01-01

    This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…

  15. Adaptive design optimization: a mutual information-based approach to model discrimination in cognitive science.

    PubMed

    Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V

    2010-04-01

    Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
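
    The mutual-information utility described above can be computed directly for discrete designs: it is the expected KL divergence from the posterior over models to the prior, averaged over predicted outcomes. A toy sketch discriminating two hypothetical retention models that predict recall probabilities at candidate lags (all numbers invented):

```python
import numpy as np

def design_utility(p_y1_given_model, prior):
    """Mutual information I(model; outcome) for one candidate design.
    p_y1_given_model[m] = P(y = 1 | model m) under that design."""
    p = np.asarray(p_y1_given_model)
    py1 = float(np.dot(prior, p))            # marginal P(y = 1)
    mi = 0.0
    for lik, py in [(p, py1), (1 - p, 1 - py1)]:
        post = prior * lik / py              # Bayes posterior over models
        mi += py * np.sum(post * np.log(post / prior))
    return mi

prior = np.array([0.5, 0.5])                 # two candidate retention models
designs = {"lag=1": [0.90, 0.85],            # P(recall | model) per design
           "lag=5": [0.60, 0.35],            # the models disagree most here
           "lag=20": [0.20, 0.15]}
best = max(designs, key=lambda d: design_utility(designs[d], prior))
print("most informative design:", best)
```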

  16. Incentives for Optimal Multi-level Allocation of HIV Prevention Resources

    PubMed Central

    Malvankar, Monali M.; Zaric, Gregory S.

    2013-01-01

    HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker, who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551

  17. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from spacecraft under certain conditions.
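
    A simplified version of the linear programming formulation (a sketch that ignores coding across links; all link and message numbers are invented): choose how many bits of each message to place on each link so that the expected value decodable by the receiver is maximized.

```python
import numpy as np
from scipy.optimize import linprog

p_link = np.array([0.9, 0.7, 0.5])       # probability each link is working
cap = np.array([100.0, 150.0, 200.0])    # link capacities (bits)
size = np.array([120.0, 180.0])          # message sizes (bits)
value_rate = np.array([2.0, 1.0])        # value per bit of each message

n_m, n_l = len(size), len(p_link)
# variable x[m, l] = bits of message m on link l, flattened row-major;
# a bit of message m on link l is worth value_rate[m] * p_link[l] in expectation
c = -(value_rate[:, None] * p_link[None, :]).ravel()   # linprog minimizes
A_ub, b_ub = [], []
for l in range(n_l):                     # link capacity constraints
    row = np.zeros(n_m * n_l)
    row[l::n_l] = 1.0
    A_ub.append(row); b_ub.append(cap[l])
for m in range(n_m):                     # cannot send more than the message
    row = np.zeros(n_m * n_l)
    row[m * n_l:(m + 1) * n_l] = 1.0
    A_ub.append(row); b_ub.append(size[m])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, None))
print(res.x.reshape(n_m, n_l))           # optimal placement of message pieces
```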

  18. Assessing the system value of optimal load shifting

    DOE PAGES

    Merrick, James; Ye, Yinyu; Entriken, Bob

    2017-04-30

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.
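
    To make the flavor of such a formulation concrete (a toy sketch, not the paper's ERCOT model): shift a fixed daily energy requirement across hours against an hourly price curve, subject to per-hour flexibility bounds; the gap between the shifted and unshifted bills is the energy arbitrage value.

```python
import numpy as np
from scipy.optimize import linprog

hours = 24
price = 20 + 15 * np.sin(np.linspace(0, 2 * np.pi, hours))  # $/MWh, illustrative
base_load = np.full(hours, 1.0)                             # MWh per hour
flex = 0.3                                                  # 30% of load is shiftable

# choose hourly consumption minimizing cost while keeping total energy fixed
A_eq = np.ones((1, hours))
b_eq = [base_load.sum()]
bounds = [((1 - flex) * b, (1 + flex) * b) for b in base_load]
res = linprog(price, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("cost without shifting:", price @ base_load)
print("cost with optimal shifting:", res.fun)   # difference = arbitrage value
```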

  19. Assessing the system value of optimal load shifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James; Ye, Yinyu; Entriken, Bob

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  20. The Potential (F)utility of a Passive Organ Donor Registration Opportunity: A Conceptual Replication.

    PubMed

    Siegel, Jason T; Alvaro, Eusebio M; Tan, Cara N; Navarro, Mario A; Garner, Lori R; Jones, Sara Pace

    2016-06-01

    Approximately 22 people die each day in the United States as a result of the shortage of transplantable organs. This is particularly problematic among Spanish-dominant Hispanics. Increasing the number of registered organ donors can reduce this deficit. The goal of the current set of studies was to conceptually replicate a prior study indicating the lack of utility of a lone, immediate and complete registration opportunity (ICRO). The study, a quasi-experimental design involving a total of 4 waves of data collection, was conducted in 2 different Mexican consulates in the United States. Guided by the IIFF Model (i.e., an ICRO, information, focused engagement, and favorable activation), each wave compared a lone ICRO to a condition that likewise included an ICRO but also included the 3 additional intervention components recommended by the model (i.e., information, focused engagement, and favorable activation). Visitors to the Mexican consulates in Tucson, Arizona, and Albuquerque, New Mexico, constituted the participant pool. New organ donor registrations represented the dependent variable. When all 4 components of the IIFF Model were present, approximately 4 registrations per day were recorded; the lone ICRO resulted in approximately 1 registration every 15 days. An ICRO, without the other components of the IIFF Model, is of minimal use in garnering organ donor registrations. Future studies should use the IIFF Model to consider how the utility of ICROs can be maximized.

  1. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. Applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the value of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with dampened oscillations.
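
    The energy-imbalance state equation in such models converts the daily calorie gap into weight change; a sketch under the common rule-of-thumb of roughly 7700 kcal per kilogram of tissue, with a deliberately simple expenditure function (both assumptions are illustrative, not the paper's calibration):

```python
KCAL_PER_KG = 7700.0     # rule-of-thumb energy density of body tissue

def simulate_weight(w0, intake_kcal, days, k=24.0):
    """Euler integration of dW/dt = (intake - expenditure(W)) / KCAL_PER_KG,
    with expenditure modeled as k kcal per kg of body weight per day."""
    w = w0
    for _ in range(days):
        w += (intake_kcal - k * w) / KCAL_PER_KG
    return w

# weight drifts toward the steady state W* = intake / k = 2200 / 24 ~ 91.7 kg
print(simulate_weight(w0=70.0, intake_kcal=2200.0, days=365))
```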

  2. Budget Allocation in a Competitive Communication Spectrum Economy

    NASA Astrophysics Data System (ADS)

    Lin, Ming-Hua; Tsai, Jung-Fa; Ye, Yinyu

    2009-12-01

    This study discusses how to adjust a "monetary budget" to meet each user's physical power demand, or to balance all individual utilities, in a competitive "spectrum market" of a communication system. In the market, multiple users share a common frequency or tone band, and each of them uses its budget to purchase its own transmit power spectra (taking others as given) to maximize its Shannon utility or pay-off function, which includes the effect of interference. A market equilibrium is a budget allocation, price spectrum, and tone power distribution that independently and simultaneously maximizes each user's utility. The equilibrium conditions of the market are formulated and analyzed, and the existence of an equilibrium is proved. Computational results and comparisons between the competitive equilibrium and Nash equilibrium solutions are also presented, which show that the competitive market equilibrium solution often provides a more efficient power distribution.
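
    With prices and the other users' spectra held fixed, each user's budget-constrained Shannon-utility maximization is the classical water-filling problem; a sketch of that best response (tone noise levels and the budget are illustrative):

```python
import numpy as np

def water_fill(noise, total_power):
    """Maximize sum(log(1 + p_k / noise_k)) s.t. sum(p_k) = total_power, p >= 0.
    Bisect on the water level mu, with p_k = max(mu - noise_k, 0)."""
    lo, hi = 0.0, float(noise.max()) + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() < total_power:
            lo = mu
        else:
            hi = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

noise = np.array([0.5, 1.0, 2.0, 4.0])   # interference-plus-noise per tone
p = water_fill(noise, total_power=4.0)
print("power per tone:", p, "utility:", np.sum(np.log1p(p / noise)))
```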

  3. Dynamic Loading of Substation Distribution Transformers: An Application for use in a Production Grade Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Recent trends in the electric power industry have led to more attention to the optimal operation of power transformers. In a deregulated environment, optimal operation means minimizing maintenance and extending the life of this critical and costly equipment for the purpose of maximizing profits. Optimal utilization of a transformer can be achieved through the use of dynamic loading. A benefit of dynamic loading is that it allows better utilization of the transformer capacity, thus increasing the flexibility and reliability of the power system. This document presents progress on a software application that can estimate the maximum time-varying loading capability of transformers. This information can be used to load devices closer to their limits without exceeding the manufacturer-specified operating limits. Maximally efficient dynamic loading of transformers requires a model that can accurately predict both top-oil temperatures (TOTs) and hottest-spot temperatures (HSTs). In previous work, two kinds of thermal TOT and HST models have been studied and used in the application: the IEEE TOT/HST models and the ASU TOT/HST models. Several metrics have been applied to evaluate model acceptability and determine the most appropriate models for use in the dynamic loading calculations. In this work, an investigation into improving the performance of the existing transformer thermal models is presented. Factors that may affect model performance, such as improper fan status and the error caused by the poor performance of the IEEE models, are discussed. Additional methods to determine the reliability of transformer thermal models, using metrics such as the time constant and the model parameters, are also provided. A new production-grade application for real-time dynamic loading operation is introduced. This application is developed using an existing planning application, TTeMP, as a starting point, which is designed for dispatchers and load specialists. To overcome the limitations of TTeMP, the new application can perform dynamic loading under emergency conditions, such as loss-of-transformer loading. It also has the capability to determine the emergency rating of the transformers for real-time estimation.
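
    For context, the simplest clause-style form of the IEEE TOT model mentioned above is a first-order exponential response of top-oil rise to the load level; a sketch of that form with illustrative parameters (a real application would use values fitted to the specific transformer):

```python
import numpy as np

def top_oil_rise(K, t, theta_init, theta_rated=40.0, R=4.0, n=0.9, tau=3.0):
    """First-order IEEE-style top-oil rise over ambient (deg C).
    K: per-unit load; R: rated ratio of load to no-load losses;
    tau: oil time constant in hours; n: oil exponent."""
    theta_ult = theta_rated * ((K**2 * R + 1) / (R + 1)) ** n   # ultimate rise
    return theta_ult + (theta_init - theta_ult) * np.exp(-t / tau)

hours = np.linspace(0.0, 12.0, 49)
rise = top_oil_rise(K=1.2, t=hours, theta_init=30.0)    # step to 120% load
print("top-oil temperature at 35 C ambient:", 35.0 + rise[-1])
```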

  4. Maximizing overall liking results in a superior product to minimizing deviations from ideal ratings: an optimization case study with coffee-flavored milk

    PubMed Central

    Li, Bangde; Hayes, John E.; Ziegler, Gregory R.

    2015-01-01

    In just-about-right (JAR) scaling and ideal scaling, attribute delta (i.e., “Too Little” or “Too Much”) reflects a subject’s dissatisfaction level for an attribute relative to their hypothetical ideal. Dissatisfaction (attribute delta) is a different construct from consumer acceptability, operationalized as liking. Therefore, we hypothesized minimizing dissatisfaction and maximizing liking would yield different optimal formulations. The objective of this research was to compare product optimization strategies, i.e. maximizing liking vis-à-vis minimizing dissatisfaction. Coffee-flavored dairy beverages (n = 20) were formulated using a fractional mixture design that constrained the proportions of coffee extract, milk, sucrose, and water. Participants (n = 388) were randomly assigned to one of three research conditions, where they evaluated 4 of the 20 samples using an incomplete block design. Samples were rated for overall liking and for intensity of the attributes sweetness, milk flavor, thickness and coffee flavor. Where appropriate, measures of overall product quality (Ideal_Delta and JAR_Delta) were calculated as the sum of the absolute values of the four attribute deltas. Optimal formulations were estimated by: a) maximizing liking; b) minimizing Ideal_Delta; or c) minimizing JAR_Delta. A validation study was conducted to evaluate product optimization models. Participants indicated a preference for a coffee-flavored dairy beverage with more coffee extract and less milk and sucrose in the dissatisfaction model compared to the formula obtained by maximizing liking. That is, when liking was optimized, participants generally liked a weaker, milkier and sweeter coffee-flavored dairy beverage. Predicted liking scores were validated in a subsequent experiment, and the optimal product formulated to maximize liking was significantly preferred to that formulated to minimize dissatisfaction by a paired preference test. These findings are consistent with the view that JAR and ideal scaling methods both suffer from attitudinal biases that are not present when liking is assessed. That is, consumers sincerely believe they want ‘dark, rich, hearty’ coffee when they do not. This paper also demonstrates the utility and efficiency of a lean experimental approach. PMID:26005291

  5. Speed over efficiency: locusts select body temperatures that favour growth rate over efficient nutrient utilization

    PubMed Central

    Miller, Gabriel A.; Clissold, Fiona J.; Mayntz, David; Simpson, Stephen J.

    2009-01-01

    Ectotherms have evolved preferences for particular body temperatures, but the nutritional and life-history consequences of such temperature preferences are not well understood. We measured thermal preferences in Locusta migratoria (migratory locusts) and used a multi-factorial experimental design to investigate relationships between growth/development and macronutrient utilization (conversion of ingesta to body mass) as a function of temperature. A range of macronutrient intake values for insects at 26, 32 and 38°C was achieved by offering individuals high-protein diets, high-carbohydrate diets or a choice between both. Locusts placed in a thermal gradient selected temperatures near 38°C, maximizing rates of weight gain; however, this enhanced growth rate came at the cost of poor protein and carbohydrate utilization. Protein and carbohydrate were equally digested across temperature treatments, but once digested both macronutrients were converted to growth most efficiently at the intermediate temperature (32°C). Body temperature preference thus yielded maximal growth rates at the expense of efficient nutrient utilization. PMID:19625322

  6. Impulsive Approach Tendencies towards Physical Activity and Sedentary Behaviors, but Not Reflective Intentions, Prospectively Predict Non-Exercise Activity Thermogenesis

    PubMed Central

    Cheval, Boris; Sarrazin, Philippe; Pelletier, Luc

    2014-01-01

    Understanding the determinants of non-exercise activity thermogenesis (NEAT) is crucial, given its extensive health benefits. Some scholars have assumed that a proneness to react differently to environmental cues promoting sedentary versus active behaviors could be responsible for inter-individual differences in NEAT. In line with this reflection and grounded in the Reflective-Impulsive Model, we test the assumption that impulsive processes related to sedentary and physical activity behaviors can prospectively predict NEAT, operationalized as spontaneous effort exerted to maintain low-intensity muscle contractions within the release phases of an intermittent maximal isometric contraction task. Participants (n = 91) completed a questionnaire assessing their intentions to adopt physical activity behaviors and a manikin task to assess impulsive approach tendencies towards physical activity behaviors (IAPA) and sedentary behaviors (IASB). Participants were then instructed to perform a maximal handgrip strength task and an intermittent maximal isometric contraction task. As hypothesized, multilevel regression analyses revealed that spontaneous effort was (a) positively predicted by IAPA, (b) negatively predicted by IASB, and (c) not predicted by physical activity intentions, after controlling for some confounding variables such as age, sex, usual PA level and average force provided during the maximal-contraction phases of the task. These effects remained constant throughout all the phases of the task. This study demonstrated that impulsive processes may play a unique role in predicting spontaneous physical activity behaviors. Theoretically, this finding reinforces the utility of a motivational approach based on dual-process models to explain inter-individual differences in NEAT. Implications for health behavior theories and behavior change interventions are outlined. PMID:25526596

  7. Impulsive approach tendencies towards physical activity and sedentary behaviors, but not reflective intentions, prospectively predict non-exercise activity thermogenesis.

    PubMed

    Cheval, Boris; Sarrazin, Philippe; Pelletier, Luc

    2014-01-01

    Understanding the determinants of non-exercise activity thermogenesis (NEAT) is crucial, given its extensive health benefits. Some scholars have assumed that a proneness to react differently to environmental cues promoting sedentary versus active behaviors could be responsible for inter-individual differences in NEAT. In line with this reflection and grounded in the Reflective-Impulsive Model, we test the assumption that impulsive processes related to sedentary and physical activity behaviors can prospectively predict NEAT, operationalized as spontaneous effort exerted to maintain low-intensity muscle contractions within the release phases of an intermittent maximal isometric contraction task. Participants (n = 91) completed a questionnaire assessing their intentions to adopt physical activity behaviors and a manikin task to assess impulsive approach tendencies towards physical activity behaviors (IAPA) and sedentary behaviors (IASB). Participants were then instructed to perform a maximal handgrip strength task and an intermittent maximal isometric contraction task. As hypothesized, multilevel regression analyses revealed that spontaneous effort was (a) positively predicted by IAPA, (b) negatively predicted by IASB, and (c) not predicted by physical activity intentions, after controlling for some confounding variables such as age, sex, usual PA level and average force provided during the maximal-contraction phases of the task. These effects remained constant throughout all the phases of the task. This study demonstrated that impulsive processes may play a unique role in predicting spontaneous physical activity behaviors. Theoretically, this finding reinforces the utility of a motivational approach based on dual-process models to explain inter-individual differences in NEAT. Implications for health behavior theories and behavior change interventions are outlined.

  8. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
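
    The decomposition described here can be evaluated directly for discrete state-outcome models: epistemic value is the mutual information between hidden states and outcomes predicted under a policy, and extrinsic value is the expected log prior preference. A small sketch with invented distributions:

```python
import numpy as np

def expected_free_energy(q_states, likelihood, log_prefs):
    """G = -(extrinsic + epistemic) value for one policy.
    q_states[s]: predicted state distribution under the policy.
    likelihood[o, s]: P(o | s).  log_prefs[o]: log prior preference."""
    q_out = likelihood @ q_states            # predicted outcomes Q(o)
    extrinsic = float(np.dot(q_out, log_prefs))          # expected utility
    # epistemic value = expected information gain = I(states; outcomes)
    epistemic = sum(q_states[s] *
                    np.sum(likelihood[:, s] * np.log(likelihood[:, s] / q_out))
                    for s in range(len(q_states)))
    return -(extrinsic + epistemic)

likelihood = np.array([[0.9, 0.2],           # two outcomes x two states
                       [0.1, 0.8]])
log_prefs = np.log(np.array([0.7, 0.3]))     # the agent prefers outcome 0
print(expected_free_energy(np.array([0.5, 0.5]), likelihood, log_prefs))
```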

  9. Quality competition and uncertainty in a horizontally differentiated hospital market.

    PubMed

    Montefiori, Marcello

    2014-01-01

    The chapter studies hospital competition in a spatially differentiated market in which patient demand reflects the quality/distance mix that maximizes their utility. Treatment is free at the point of use and patients freely choose the provider which best fits their expectations. Hospitals may have asymmetric objectives and costs; however, they are reimbursed using a uniform prospective payment. The chapter provides different equilibrium outcomes under perfect and asymmetric information. The results show that asymmetric costs, in the case where hospitals are profit maximizers, allow for a social welfare and quality improvement. On the other hand, the presence of a publicly managed hospital which pursues the objective of quality maximization is able to ensure a higher level of quality, patient surplus and welfare. However, the extent of this outcome may be considerably reduced when high levels of public hospital inefficiency are detectable. Finally, the negative consequences caused by the presence of asymmetric information are highlighted in the different scenarios of ownership/objectives and costs. The setting adopted in the model aims at describing the upcoming European market for secondary health care, focusing on hospital behavior, and is intended to help the policy-maker understand real-world dynamics.

  10. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species.

    PubMed

    Irizarry, Kristopher J L; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L; Barrett, Gini; Barr, Margaret C

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management.

  11. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species

    PubMed Central

    Irizarry, Kristopher J. L.; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L.; Barrett, Gini; Barr, Margaret C.

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management. PMID:27376076

  12. Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery

    NASA Technical Reports Server (NTRS)

    Rose, William I.

    1993-01-01

    Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth, and particle mass in volcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectrometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.

  13. Phosphate metabolite concentrations and ATP hydrolysis potential in normal and ischaemic hearts

    PubMed Central

    Wu, Fan; Zhang, Eric Y; Zhang, Jianyi; Bache, Robert J; Beard, Daniel A

    2008-01-01

    To understand how cardiac ATP and CrP remain stable with changes in work rate – a phenomenon that has eluded mechanistic explanation for decades – data from 31P magnetic resonance spectroscopy (31P-MRS) are analysed to estimate cytoplasmic and mitochondrial phosphate metabolite concentrations in the normal state, during high cardiac work states, during acute ischaemia, and during reactive hyperaemic recovery. Analysis is based on simulating distributed heterogeneous oxygen transport in the myocardium integrated with a detailed model of cardiac energy metabolism. The model predicts that baseline myocardial free inorganic phosphate (Pi) concentration in the canine myocyte cytoplasm – a variable not accessible to direct non-invasive measurement – is approximately 0.29 mM and increases to 2.3 mM near maximal cardiac oxygen consumption. During acute ischaemia (from ligation of the left anterior descending artery) Pi increases to approximately 3.1 mM and ATP consumption in the ischaemic tissue is reduced quickly to less than half its baseline value before the creatine phosphate (CrP) pool is 18% depleted. It is determined from these experiments that the maximal rate of oxygen consumption of the heart is an emergent property and is limited not simply by the maximal rate of ATP synthesis, but by the maximal rate at which ATP can be synthesized at a potential at which it can be utilized. The critical free energy of ATP hydrolysis for cardiac contraction that is consistent with these findings is approximately −63.5 kJ mol^-1. Based on theoretical findings, we hypothesize that inorganic phosphate is both the primary feedback signal for stimulating oxidative phosphorylation in vivo and also the most significant product of ATP hydrolysis in limiting the capacity of the heart to hydrolyse ATP in vivo. Due to the lack of precise quantification of Pi in vivo, these hypotheses and associated model predictions remain to be carefully tested experimentally. PMID:18617566

  14. Time-Extended Policies in Multi-Agent Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2004-01-01

    Reinforcement learning methods perform well in many domains where a single agent needs to take a sequence of actions to perform a task. These methods use sequences of single-time-step rewards to create a policy that tries to maximize a time-extended utility, which is a (possibly discounted) sum of these rewards. In this paper we build on our previous work showing how these methods can be extended to a multi-agent environment where each agent creates its own policy that works towards maximizing a time-extended global utility over all agents' actions. We show improved methods for creating time-extended utilities for the agents that are both "aligned" with the global utility and "learnable." We then show how to create single-time-step rewards while avoiding the pitfall of having rewards aligned with the global reward leading to utilities not aligned with the global utility. Finally, we apply these reward functions to the multi-agent Gridworld problem. We explicitly quantify a utility's learnability and alignment, and show that reinforcement learning agents using the prescribed reward functions successfully trade off learnability and alignment. As a result, they outperform both global (e.g., team games) and local (e.g., "perfectly learnable") reinforcement learning solutions by as much as an order of magnitude.
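
    The "aligned and learnable" construction referred to above is, in this line of work, typically a difference reward; a minimal Python sketch, assuming a caller-supplied global_utility function (the names and the null-action convention are illustrative, not the paper's code):

        # Difference reward D_i = G(z) - G(z with agent i's action nulled).
        # The subtracted term does not depend on agent i, so improving D_i
        # also improves the global utility G (alignment), while D_i is far
        # more sensitive to agent i's own action than G is (learnability).
        def difference_reward(global_utility, joint_actions, i, null_action=None):
            g = global_utility(joint_actions)
            counterfactual = list(joint_actions)
            counterfactual[i] = null_action  # remove agent i's contribution
            return g - global_utility(counterfactual)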

  15. Creating an Agent Based Framework to Maximize Information Utility

    DTIC Science & Technology

    2008-03-01

    information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings

  16. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effects of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P ≤ 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed in which model inputs were randomly varied using a normal distribution about the measured values, with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
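
    The flavor of such a first-principles model can be sketched in a few lines: required power balances aerodynamic drag, rolling resistance, and gravity at a given road speed. All coefficient values below are generic illustrations, not the fitted values from the paper:

        import math

        # Steady-state power demand for road cycling (drivetrain losses
        # folded into eff); grade is rise/run, v and v_wind in m/s.
        def required_power(v, m=80.0, grade=0.0, v_wind=0.0,
                           CdA=0.35, Crr=0.004, rho=1.2, g=9.81, eff=0.97):
            theta = math.atan(grade)
            drag = 0.5 * rho * CdA * (v + v_wind) ** 2  # aerodynamic drag (N)
            rolling = Crr * m * g * math.cos(theta)     # rolling resistance (N)
            climb = m * g * math.sin(theta)             # gravity component (N)
            return (drag + rolling + climb) * v / eff   # rider power (W)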

  17. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single-ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
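
    The point-estimation step described above (maximum a posteriori estimates found by simplex search) has a compact prototype; the Gaussian likelihood and prior, and the forward model lpn_model mapping parameters to predicted clinical targets, are assumptions of this sketch rather than the paper's implementation:

        import numpy as np
        from scipy.optimize import minimize

        # Negative log-posterior: Gaussian data misfit plus Gaussian prior.
        def neg_log_posterior(theta, lpn_model, targets, sigma, mu0, sd0):
            misfit = (lpn_model(theta) - targets) / sigma
            prior = (theta - mu0) / sd0
            return 0.5 * (np.sum(misfit ** 2) + np.sum(prior ** 2))

        # MAP point estimate via Nelder-Mead simplex optimization.
        def map_estimate(theta0, lpn_model, targets, sigma, mu0, sd0):
            res = minimize(neg_log_posterior, theta0,
                           args=(lpn_model, targets, sigma, mu0, sd0),
                           method="Nelder-Mead")
            return res.x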

  18. Hybrid protection algorithms based on game theory in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming

    2011-12-01

    With network size increasing, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data losses, since each wavelength carries a large volume of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivable algorithms achieve only unilateral optimization of the profit of either users or network operators; they cannot find the double-win optimal solution that accounts for the economic interests of both. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, "hybrid protection" means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.

  19. Sequence, assembly and annotation of the maize W22 genome

    USDA-ARS's Scientific Manuscript database

    Since its adoption by Brink and colleagues in the 1950s and 60s, the maize W22 inbred has been utilized extensively to understand fundamental genetic and epigenetic processes such as recombination, transposition and paramutation. To maximize the utility of W22 in gene discovery, we have Illumina sequen...

  20. Complete utilization of spent coffee to biodiesel, bio-oil and biochar

    USDA-ARS's Scientific Manuscript database

    Energy production from renewable or waste biomass/material is a more attractive alternative compared to conventional feedstocks, such as corn and soybean. The objective of this study is to maximize utilization of any waste organic carbon material to produce renewable energy. This study presents tota...

  1. 76 FR 49473 - Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ... Utility of List 1 Chemicals Screened Through EPA's Endocrine Disruptor Screening Program; Notice of... to the test orders issued under the Endocrine Disruptor Screening Program. DATES: Comments must be... testing of chemical substances for potential endocrine effects. Potentially affected entities, identified...

  2. Dynamics of melanoma tumor therapy with vesicular stomatitis virus: explaining the variability in outcomes using mathematical modeling.

    PubMed

    Rommelfanger, D M; Offord, C P; Dev, J; Bajzer, Z; Vile, R G; Dingli, D

    2012-05-01

    Tumor-selective, replication-competent viruses are being tested for cancer gene therapy. This approach introduces a new therapeutic paradigm due to potential replication of the therapeutic agent and induction of a tumor-specific immune response. However, the experimental outcomes are quite variable, even when studies utilize highly inbred strains of mice and the same cell line and virus. Recognizing that virotherapy is an exercise in population dynamics, we utilize mathematical modeling to understand the variable outcomes observed when B16ova malignant melanoma tumors are treated with vesicular stomatitis virus in syngeneic, fully immunocompetent mice. We show how variability in the initial tumor size and the actual amount of virus delivered to the tumor have critical roles in the outcome of therapy. Virotherapy works best when tumors are small, and a robust innate immune response can lead to superior tumor control. Strategies that reduce tumor burden without suppressing the immune response and methods that maximize the amount of virus delivered to the tumor should optimize tumor control in this model system.
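
    The population-dynamics framing can be made concrete with a generic tumour-virus system of ordinary differential equations (the equations and parameter values below are a textbook-style illustration, not the model published in the paper):

        import numpy as np
        from scipy.integrate import odeint

        # T: uninfected tumour cells, I: infected cells, V: free virus.
        def dynamics(y, t, r=0.3, k=1e-7, delta=1.0, burst=50.0, c=2.0):
            T, I, V = y
            dT = r * T - k * T * V          # tumour growth minus infection
            dI = k * T * V - delta * I      # infected cells lyse at rate delta
            dV = burst * delta * I - c * V  # burst release minus clearance
            return [dT, dI, dV]

        t = np.linspace(0.0, 30.0, 300)
        # Outcomes depend strongly on initial tumour size and virus dose,
        # mirroring the sensitivity described in the abstract.
        sol = odeint(dynamics, [1e6, 0.0, 1e4], t)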

  3. Modeling cross-border care in the EU using a principal-agent framework.

    PubMed

    Crivelli, L; Zweifel, P

    1998-01-01

    Cross-border care is likely to become a major issue among EU countries because patients have the option of obtaining treatment abroad under Community Regulation 1408/71. This paper develops a model formalizing both the patient's decision to apply for cross-border care and the authorizing physician's decision to admit a patient to the program. The patient is assumed to maximize expected utility, which depends on the quality of care and the length of waiting in the home country and the host country, respectively. Not all patients qualifying for the EU program present themselves to the authorizing physician because of the transaction cost involved. The physician, in turn, shapes effective demand for authorization through her rate of refusal, which constitutes information to potential applicants about the probability of obtaining treatment abroad. The authorizing physician thus acts as an agent serving two principals, her patient and her national government, trading off the perceived utility loss of patients who are rejected against her commitment to domestic health policy. The model may be used to explain existing patient flows between EU countries.

  4. Model of refrigerated display-space allocation for multi agro-perishable products considering markdown policy

    NASA Astrophysics Data System (ADS)

    Satiti, D.; Rusdiansyah, A.

    2018-04-01

    Problems that need more attention in the agri-food supply chain are loss and waste as consequences of improper quality control and excessive inventories. The use of cold storage is still one of the favourite technologies for controlling product quality among the majority of retailers. We consider the temperature of cold storage in determining the inventory and pricing strategies based on identified product quality. This study aims to minimize agri-food waste, improve the utilization of cold-storage facilities, and maximize the retailer's profit by determining the refrigerated display-space allocation and markdown policy based on identified food shelf life. The proposed model is evaluated with several different scenarios to find the right strategy.

  5. Optimization of power generating thermoelectric modules utilizing LNG cold energy

    NASA Astrophysics Data System (ADS)

    Jeong, Eun Soo

    2017-12-01

    A theoretical investigation to optimize thermoelectric modules, which convert LNG cold energy into electrical power, is performed using a novel one-dimensional analytic model. In the model the optimum thermoelement length and external load resistance, which maximize the energy conversion ratio, are determined by the heat supplied to the cold heat reservoir, the hot and cold side temperatures, the thermal and electrical contact resistances and the properties of thermoelectric materials. The effects of the thermal and electrical contact resistances and the heat supplied to the cold heat reservoir on the maximum energy conversion ratio, the optimum thermoelement length and the optimum external load resistance are shown.
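
    The load-matching logic behind the "optimum external load resistance" can be sketched numerically: for a module with Seebeck voltage S*dT and internal resistance R_in, output power peaks near a load/internal resistance ratio of one, and contact resistances shift that optimum. All numbers below are assumptions for illustration:

        import numpy as np

        S, dT, R_in = 0.05, 100.0, 2.0          # V/K, K, ohm (illustrative)
        m = np.linspace(0.1, 5.0, 491)          # load/internal resistance ratio
        P = (S * dT) ** 2 * m / (R_in * (1 + m) ** 2)  # output power (W)
        print(f"peak power at m = {m[np.argmax(P)]:.2f}")  # ~1.0 without contact losses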

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thaule, S.B.; Postvoll, W.

    Installation by Den norske stats oljeselskap A.S. (Statoil) of a powerful pipeline-modeling system on Zeepipe has allowed this major North Sea gas pipeline to meet the growing demands and seasonal variations of the European gas market. The Troll gas-sales agreement (TGSA) in 1986 called for large volumes of Norwegian gas to begin arriving from the North Sea Sleipner East field in October 1993. It is important to Statoil to maintain regular gas deliveries from its integrated transport network. In addition, high utilization of transport capacity maximizes profits. In advance of operations, Statoil realized that state-of-the-art supervisory control and data acquisition (SCADA) and pipeline-modeling systems (PMS) would be necessary to meet its goals and to remain the most efficient North Sea operator. The paper describes the linking of Troll and Zeebrugge, contractual issues, the supervisory system, the SCADA module, pipeline modeling, the real-time model, the look-ahead model, the predictive model, and model performance.

  7. Adoption of Emissions Abating Technologies by U.S. Electricity Producing Firms Under the SO2 Emission Allowance Market

    NASA Astrophysics Data System (ADS)

    Creamer, Gregorio Bernardo

    The objective of this research is to determine the adaptation strategies that coal-based, electricity producing firms in the United States utilize to comply with the emission control regulations imposed by the SO2 Emissions Allowance Market created by the Clean Air Act Amendment of 1990, and the effect of market conditions on the decision making process. In particular, I take into consideration (1) the existence of carbon contracts for the provision of coal that may affect coal prices at the plant level, and (2) local and geographical conditions, as well as political arrangements, that may encourage firms to adopt strategies that appear socially less efficient. As the electricity producing sector is a regulated sector, firms do not necessarily behave in a way that maximizes the welfare of society when reacting to environmental regulations. In other words, profit maximization actions taken by the firm do not necessarily translate into utility maximization for society. Therefore, the environmental regulator has to direct firms into adopting strategies that are socially efficient, i.e., that maximize utility. The SO2 permit market is an instrument that allows each firm to reduce marginal emissions abatement costs according to its own production conditions and abatement costs. Companies will be driven to opt for a cost-minimizing emissions abatement strategy or a combination of abatement strategies when adapting to new environmental regulations or markets. Firms may adopt one or more of the following strategies to reduce abatement costs while meeting the emission constraints imposed by the SO2 Emissions Allowance Market: (1) continue with business as usual on the production site while buying SO2 permits to comply with environmental regulations, (2) switch to higher quality, lower sulfur coal inputs that will generate less SO2 emissions, or (3) adopt new emissions-abating technologies. A utility optimization condition is that the marginal value of each input should be equal to the product generated by using it and to the activities that are required by new regulations. The comparative technological and scale efficiency factors of coal-based electricity producing plants are calculated using the Data Envelopment Analysis (DEA) framework, and used as proxies to test this condition. In the empirical analysis, econometric models of the response of firms to emissions control are analyzed around the following aspects: (1) characterization of the behavior of firms and their efficiency, (2) relevant variables that trigger the adoption of technology, that is, the acquisition of scrubbers, and (3) the influence of exogenous variables, such as the existence of contracts, distance from mine to plant, and local conditions of the region where plants are located.

  8. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
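
    The master-client dispatch pattern described above has a minimal mpi4py prototype; the message contents and the per-event granularity are illustrative assumptions, not ATLAS code:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        if rank == 0:  # master: hand out events until the queue drains
            events, workers = list(range(1000)), comm.Get_size() - 1
            while workers:
                status = MPI.Status()
                comm.recv(source=MPI.ANY_SOURCE, status=status)  # idle worker
                if events:
                    comm.send(events.pop(), dest=status.Get_source())
                else:
                    comm.send(None, dest=status.Get_source())  # shut worker down
                    workers -= 1
        else:  # worker: request work whenever idle, keeping the core busy
            while True:
                comm.send("ready", dest=0)
                event = comm.recv(source=0)
                if event is None:
                    break
                # process_event(event)  # hypothetical event-processing call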

  9. Utilization of design data on conventional system to building information modeling (BIM)

    NASA Astrophysics Data System (ADS)

    Akbar, Boyke M.; Z. R., Dewi Larasati

    2017-11-01

    Nowadays, infrastructure development has become one of the main priorities in countries such as Indonesia. The conventional design system is considered to no longer effectively support infrastructure projects, especially for high-complexity building designs, due to its fragmented-system issues. BIM comes as one of the solutions for managing projects in an integrated manner. Despite all the known BIM benefits, there are obstacles to the migration process to BIM. The two main obstacles are the unpreparedness of some project parties for BIM implementation, and concerns about leaving behind the existing database and creating a new one in the BIM system. This paper discusses possibilities for utilizing the existing CAD data from the conventional design system in a BIM system. The existing conventional CAD data and the BIM design system output were studied to examine compatibility issues between the two, followed by possible utilization scheme-strategies. The goal of this study is to increase project parties' eagerness to migrate to BIM by maximizing existing data utilization, which will hopefully also increase the quality of BIM-based project workflows.

  10. A Threshold-Free Filtering Algorithm for Airborne LiDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on the assumption that the point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points can then be recast as the separation of the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize the separation: EM is used to compute maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of higher likelihood. Furthermore, intensity information was also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm achieves a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
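
    The core labelling idea (fit a two-component Gaussian mixture by EM, then assign each point to the more likely component, with no user threshold) can be sketched with scikit-learn; the choice of a per-point height feature is an assumption of this sketch:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # feature: (n_points,) array, e.g. residual height above a coarse
        # ground surface; the lower-mean component is taken to be ground.
        def label_ground(feature):
            X = feature.reshape(-1, 1)
            gmm = GaussianMixture(n_components=2).fit(X)  # EM fit
            ground = int(np.argmin(gmm.means_.ravel()))
            return gmm.predict(X) == ground  # True where point is ground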

  11. Optimizing Industrial Consumer Demand Response Through Disaggregation, Hour-Ahead Pricing, and Momentary Autonomous Control

    NASA Astrophysics Data System (ADS)

    Abdulaal, Ahmed

    The work in this study addresses the current limitations of the price-driven demand response (DR) approach. Mainly, the dependence on consumers to respond in an energy-aware manner, the response timeliness, the difficulty of applying DR in a busy industrial environment, and the problem of load synchronization are of utmost concern. In order to conduct a simulation study, a realistic price simulation model and consumers' building load models are created using real data. DR action is optimized using an autonomous control method, which eliminates the dependency on frequent consumer engagement. Since load scheduling and long-term planning approaches are infeasible in the industrial environment, the proposed method utilizes instantaneous DR in response to hour-ahead price signals (RTP-HA). Preliminary simulation results showed savings on the consumer side at the cost of an increased supplier-side burden due to the aggregate effect of the universal DR policies. Therefore, a consumer disaggregation strategy is briefly discussed. Finally, a refined discrete-continuous control system is presented, which utilizes multi-objective Pareto optimization, evolutionary programming, utility functions, and bidirectional loads. Demonstrated through a virtual testbed fitted with real data, the new system achieves momentary optimized DR in real-time while maximizing the consumer's wellbeing.

  12. Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)

    NASA Astrophysics Data System (ADS)

    Arsali, Mohammad H.

    1998-12-01

    The state-of-the-art restructuring of power industries is changing the fundamental nature of the retail electricity business. As a result, the so-called Integrated Resource Planning (IRP) strategies implemented by electric utilities are also undergoing modifications. Such modifications evolve from the imminent considerations to minimize revenue requirements and maximize electrical system reliability vis-a-vis capacity additions (viewed as potential investments). IRP modifications also provide service-design bases to meet customer needs towards profitability. The purpose of this research, as presented in this dissertation, is to propose procedures for optimal IRP intended to expand the generation facilities of a power system over a stretched period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) Historical perspective and evolutionary aspects of power system production-costing models and optimization techniques; (2) A survey of major U.S. electric utilities adopting IRP under a changing socioeconomic environment; (3) A new technique designated as the Segmentation Method for production-costing via IRP optimization; (4) Construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; (5) A genetic algorithm based approach for IRP optimization using the fuzzy relational database.

  13. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  14. Reputation, Pricing and the E-Science Grid

    NASA Astrophysics Data System (ADS)

    Anandasivam, Arun; Neumann, Dirk

    One of the fundamental aspects of efficient Grid usage is the optimization of resource allocation among the participants. However, this has not yet materialized. Each user is a self-interested participant trying to maximize his utility, where utility is determined not only by the fastest completion time but by prices as well. Future revenues are influenced by users' reputation. Reputation mechanisms help to build trust between loosely coupled and geographically distributed participants. Providers need an incentive to reduce selfish cancellation of jobs and privileging of their own jobs. In this chapter we present first an offline scheduling mechanism with a fixed price. Jobs are collected by a broker and scheduled to machines. The goal of the broker is to balance the load and to maximize the revenue in the network. Consumers can submit their jobs according to their preferences, while taking the incentives of the broker into account. This mechanism does not consider reputation. In a second step, a reputation-based pricing mechanism for simple but fair pricing of resources is analyzed. In e-Science, researchers do not appreciate idiosyncratic pricing strategies and policies. Their interest lies in doing research in an efficient manner. Consequently, in our mechanism the price is tightly coupled to the reputation of a site to guarantee fairness of pricing and facilitate price determination. Furthermore, the price is not the only parameter, as completion time plays an important role when deadlines have to be met. We provide a flexible utility and decision model for every participant and analyze the outcome of our reputation-based pricing system via simulation.

  15. Will the Needs-Based Planning of Health Human Resources Currently Undertaken in Several Countries Lead to Excess Supply and Inefficiency?

    PubMed

    Basu, Kisalaya; Pak, Maxwell

    2016-01-01

    Recently, the emphasis on health human resources (HHR) planning has shifted away from a utilization-based approach toward a needs-based one in which planning is based on the projected health needs of the population. However, needs-based models that are currently in use rely on a definition of 'needs' that includes only the medical circumstances of individuals and not personal preferences or other socio-economic factors. We examine whether planning based on such a narrow definition will maximize social welfare. We show that, in a publicly funded healthcare system, if the planner seeks to meet the aggregate need without taking utilization into consideration, then oversupply of HHR is likely because 'needs' do not necessarily translate into 'usage.' Our result suggests that HHR planning should track the healthcare system as access gradually improves because, even if health care is fully accessible, individuals may not fully utilize it to the degree prescribed by their medical circumstances.

  16. Maximally Symmetric Composite Higgs Models.

    PubMed

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  17. Assessing the Problem Formulation in an Integrated Assessment Model: Implications for Climate Policy Decision-Support

    NASA Astrophysics Data System (ADS)

    Garner, G. G.; Reed, P. M.; Keller, K.

    2014-12-01

    Integrated assessment models (IAMs) are often used with the intent to aid in climate change decisionmaking. Numerous studies have analyzed the effects of parametric and/or structural uncertainties in IAMs, but uncertainties regarding the problem formulation are often overlooked. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the problem formulation. The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decisionmakers, however, may be concerned with a broader range of values and preferences that are not captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing both (ii) the costs of abatement and (iii) the damages due to climate change. We derive a set of Pareto-optimal solutions over which decisionmakers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.

  18. Maintaining Registered Nurses' Currency in Informatics

    ERIC Educational Resources Information Center

    Strawn, Jennifer Alaine

    2017-01-01

    Technology has changed how registered nurses (RNs) provide care at the bedside. As more technologies are utilized to improve quality of care, safety of care, maximize efficiencies, and decrease costs of care, one must question how well the information technologies (IT) are fully integrated and utilized by the front-line bedside nurse in his or her…

  19. Network efficient power control for wireless communication systems.

    PubMed

    Campos-Delgado, Daniel U; Luna-Rivera, Jose Martin; Martinez-Sánchez, C J; Gutierrez, Carlos A; Tecpanecatl-Xihuitl, J L

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer-loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by adaptively considering the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer-loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next, the inner-loop implements a feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process, detector selection, and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network.

  20. Network Efficient Power Control for Wireless Communication Systems

    PubMed Central

    Campos-Delgado, Daniel U.; Luna-Rivera, Jose Martin; Martinez-Sánchez, C. J.; Gutierrez, Carlos A.; Tecpanecatl-Xihuitl, J. L.

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer-loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by adaptively considering the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer-loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next, the inner-loop implements a feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process, detector selection, and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network. PMID:24683350
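
    The outer-loop search exploits concavity: for a concave utility of SINR net of a linear power cost, the derivative is monotone, so bisection on the derivative converges to the optimal SINR target with a guarantee. A sketch under an assumed logarithmic utility (not the paper's exact function):

        # Maximize U(g) = ln(1 + g) - price * g over SINR g; since U is
        # concave, its derivative is decreasing and bisection suffices.
        def optimal_sinr(price, lo=1e-3, hi=1e3, tol=1e-6):
            du = lambda g: 1.0 / (1.0 + g) - price  # U'(g)
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if du(mid) > 0 else (lo, mid)
            return 0.5 * (lo + hi)  # e.g. price=0.1 gives g* = 9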

  1. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing the expected utilities of all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
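
    The policy comparison described above can be illustrated with a toy enumeration, assuming independent success probabilities and additive utilities across the related negotiations (simplifications of this sketch, not the paper's MNN/MNID machinery):

        from itertools import product

        # p[i]: success probability of negotiation i; u[i]: its utility.
        # A policy picks a subset to conduct; the joint outcome requires
        # all picked negotiations to succeed.
        def best_policy(p, u):
            best, best_eu = None, float("-inf")
            for policy in product((0, 1), repeat=len(p)):
                joint_p, joint_u = 1.0, 0.0
                for pick, pi, ui in zip(policy, p, u):
                    if pick:
                        joint_p *= pi
                        joint_u += ui
                eu = joint_p * joint_u  # expected utility of this policy
                if eu > best_eu:
                    best, best_eu = policy, eu
            return best, best_eu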

  2. An empirical analysis of the corporate call decision

    NASA Astrophysics Data System (ADS)

    Carlson, Murray Dean

    1998-12-01

    In this thesis we provide insights into the behavior of financial managers of utility companies by studying their decisions to redeem callable preferred shares. In particular, we investigate whether or not an option pricing based model of the call decision, with managers who maximize shareholder value, does a better job of explaining callable preferred share prices and call decisions than do other models of the decision. In order to perform these tests, we extend an empirical technique introduced by Rust (1987) to include the use of information from preferred share prices in addition to the call decisions. The model we develop to value the option embedded in a callable preferred share differs from standard models in two ways. First, as suggested in Kraus (1983), we explicitly account for transaction costs associated with a redemption. Second, we account for state variables that are observed by the decision makers but not by the preferred shareholders. We interpret these unobservable state variables as the benefits and costs associated with a change in capital structure that can accompany a call decision. When we add this variable, our empirical model changes from one which predicts exactly when a share should be called to one which predicts the probability of a call as the function of the observable state. These two modifications of the standard model result in predictions of calls, and therefore of callable preferred share prices, that are consistent with several previously unexplained features of the data; we show that the predictive power of the model is improved in a statistical sense by adding these features to the model. The pricing and call probability functions from our model do a good job of describing call decisions and preferred share prices for several utilities. Using data from shares of the Pacific Gas and Electric Co. (PGE) we obtain reasonable estimates for the transaction costs associated with a call. Using a formal empirical test, we are able to conclude that the managers of the Pacific Gas and Electric Company clearly take into account the value of the option to delay the call when making their call decisions. Overall, the model seems to be robust to tests of its specification and does a better job of describing the data than do simpler models of the decision making process. Limitations in the data do not allow us to perform the same tests in a larger cross-section of utility companies. However, we are able to estimate transaction cost parameters for many firms and these do not seem to vary significantly from those of PGE. This evidence does not cause us to reject our hypothesis that managerial behavior is consistent with a model in which managers maximize shareholder value.

  3. Integrating geographic information systems and remote sensing with spatial econometric and mixed logit models for environmental valuation

    NASA Astrophysics Data System (ADS)

    Wells, Aaron Raymond

    This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from $57.42 to $25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands. Results of robust, generalized moments estimation of a double logarithmic first-order spatial autoregressive error model (inverse distance weights with spatial dependence up to 1500 m) indicate that the implicit cost of damages to forested wetlands caused by channelization equaled -$5,438 ha^-1. Collectively, the results of this dissertation provide economic measures of the damages to and benefits of environmental assets, help private landowners and policy makers identify the amenity attributes preferred by the public, and improve the management of natural resources.

  4. An analysis of competitive bidding by providers for indigent medical care contracts.

    PubMed Central

    Kirkman-Liff, B L; Christianson, J B; Hillman, D G

    1985-01-01

    This article develops a model of behavior in bidding for indigent medical care contracts in which bidders set bid prices to maximize their expected utility, conditional on estimates of variables which affect the payoff associated with winning or losing a contract. The hypotheses generated by this model are tested empirically using data from the first round of bidding in the Arizona indigent health care experiment. The behavior of bidding organizations in Arizona is found to be consistent in most respects with the predictions of the model. Bid prices appear to have been influenced by estimated costs and by expectations concerning the potential loss from not securing a contract, the initial wealth of the bidding organization, and the expected number of competitors in the bidding process. PMID:4086301
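
    The bid-setting logic of the model can be illustrated with a toy expected-utility maximization; the win-probability curve, the concave utility, and all numbers are assumptions of this sketch, not estimates from the Arizona data:

        import numpy as np

        # Choose the bid maximizing E[U] = p_win * U(wealth + bid - cost)
        #                                + (1 - p_win) * U(wealth - loss).
        def optimal_bid(cost=100.0, loss_if_lose=20.0, wealth=100.0):
            bids = np.linspace(50.0, 200.0, 151)
            p_win = np.exp(-0.02 * np.clip(bids - cost, 0.0, None))
            u = np.log1p  # concave utility, i.e. risk aversion
            eu = (p_win * u(wealth + bids - cost)
                  + (1.0 - p_win) * u(wealth - loss_if_lose))
            return bids[np.argmax(eu)]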

  5. A structural econometric model of family valuation and choice of employer-sponsored health insurance in the United States.

    PubMed

    Vanness, David J

    2003-09-01

    This paper estimates a fully structural unitary household model of employment and health insurance decisions for dual wage-earner families with children in the United States, using data from the 1987 National Medical Expenditure Survey. Families choose hours of work and the breakdown of compensation between cash wages and health insurance benefits for each wage earner in order to maximize expected utility under uncertain need for medical care. Heterogeneous demand for employer-sponsored health insurance is thus generated directly from variations in health status and earning potential. The paper concludes by discussing the benefits of using structural models for simulating welfare effects of insurance reform relative to the costly assumptions that must be imposed for identification.

  6. In-hardware demonstration of model-independent adaptive tuning of noisy systems with arbitrary phase drift

    DOE PAGES

    Scheinker, Alexander; Baily, Scott; Young, Daniel; ...

    2014-08-01

    In this work, an implementation of a recently developed model-independent adaptive control scheme, for tuning uncertain and time-varying systems, is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary shifts of the cavity phases, re-tuning the cavity settings and maximizing beam acceptance. Because it is model-independent, it can be utilized for continuous adaptation to time-variation of a large system, such as that due to thermal drift or damage to components, in which the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
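
    The dithering logic behind such model-independent tuning has a compact discrete-time sketch: each setting oscillates at its own frequency, and the measured (noisy) cost modulates the oscillation phase so the settings drift toward a cost minimum on average. Gains, frequencies, and the quadratic test cost are assumptions of the sketch, not the accelerator settings:

        import numpy as np

        # Bounded extremum-seeking update: on average the parameters follow
        # -(k * alpha / 2) * grad(cost), using only scalar cost readings.
        def tune(cost, theta, n_steps=20000, dt=1e-3, k=2.0, alpha=1.0):
            omega = 50.0 * (1.0 + 0.1 * np.arange(len(theta)))  # distinct freqs
            for step in range(n_steps):
                c = cost(theta)  # noisy scalar measurement is all that is needed
                t = step * dt
                theta = theta + dt * np.sqrt(alpha * omega) * np.cos(omega * t + k * c)
            return theta

        # e.g. tune(lambda th: ((th - 1.0) ** 2).sum() + 0.05 * np.random.randn(),
        #           np.zeros(2))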

  7. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT: BROME AGRI SALES, LTD., MAXIMIZER SEPARATOR, MODEL MAX 1016 - 03/01/WQPC-SWP

    EPA Science Inventory

    Verification testing of the Brome Agri Sales Ltd. Maximizer Separator, Model MAX 1016 (Maximizer) was conducted at the Lake Wheeler Road Field Laboratory Swine Educational Unit in Raleigh, North Carolina. The Maximizer is an inclined screen solids separator that can be used to s...

  9. Estimating the relative utility of screening mammography.

    PubMed

    Abbey, Craig K; Eckstein, Miguel P; Boone, John M

    2013-05-01

    The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
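
    The estimate rests on a standard signal-detection identity: at the operating point that maximizes expected utility, the ROC slope m satisfies m = [(1 - p)/p] * (1/U_R), where p is prevalence and U_R the relative utility. A quick numeric check using the figures quoted above:

```python
# Worked check of the slope-utility relation used above (standard identity):
# at the optimal operating point m = (1 - p) / (p * U_R), so U_R = (1 - p) / (p * m).
p = 0.0059         # DMIST prevalence at 365-day verification
m = 1.0            # "optimal ROC slopes are near unity"
U_R = (1 - p) / (p * m)
print(round(U_R))  # ~168, in line with the reported estimate of 162 (±14%)
```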

  10. Collective Intelligence. Chapter 17

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2003-01-01

    Many systems of self-interested agents have an associated performance criterion that rates the dynamic behavior of the overall system. This chapter presents an introduction to the science of such systems. Formally, collectives are defined as any system having the following two characteristics: First, the system must contain one or more agents each of which we view as trying to maximize an associated private utility; second, the system must have an associated world utility function that rates the possible behaviors of that overall system. In practice, collectives are often very large, distributed, and support little, if any, centralized communication and control, although those characteristics are not part of their formal definition. A naturally occurring example of a collective is a human economy. One can identify the agents and their private utilities as the human individuals in the economy and the associated personal rewards they are each trying to maximize. One could then identify the world utility as the time average of the gross domestic product. ("World utility" per se is not a construction internal to a human economy, but rather something defined from the outside.) To achieve high world utility it is necessary to avoid having the agents work at cross-purposes lest phenomena like liquidity traps or the Tragedy of the Commons (TOC) occur, in which the agents' individual pursuit of their private utilities lowers world utility. The obvious way to avoid such phenomena is by modifying the agents' utility functions to be "aligned" with the world utility. This can be done via punitive legislation. A real-world example of an attempt to do this was the creation of antitrust regulations designed to prevent monopolistic practices.

  11. Comparison of particle swarm optimization and differential evolution for aggregators' profit maximization in the demand response system

    NASA Astrophysics Data System (ADS)

    Wisittipanit, Nuttachat; Wisittipanich, Warisa

    2018-07-01

    Demand response (DR) refers to changes in the electricity use patterns of end-users in response to incentive payment designed to prompt lower electricity use during peak periods. Typically, there are three players in the DR system: an electric utility operator, a set of aggregators and a set of end-users. The DR model used in this study aims to minimize the operator's operational cost and offer rewards to aggregators, while profit-maximizing aggregators compete to sell DR services to the operator and provide compensation to end-users for altering their consumption profiles. This article presents the first application of two metaheuristics in the DR system: particle swarm optimization (PSO) and differential evolution (DE). The objective is to optimize the incentive payments during various periods to satisfy all stakeholders. The results show that DE significantly outperforms PSO, since it can attain better compensation rates, lower operational costs and higher aggregator profits.
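
    For illustration, the following is a minimal DE/rand/1/bin loop of the kind compared above; the encoding of incentive rates and the toy objective are assumptions, since the paper's exact formulation is not reproduced here.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin sketch for minimizing f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # rand/1 mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True              # ensure >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                       # greedy selection
                X[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return X[best], fit[best]

# Toy objective over incentive rates in two periods (placeholder for the
# operator-cost / aggregator-profit trade-off in the paper):
w, cost = differential_evolution(lambda x: (x[0] - 0.3)**2 + (x[1] - 0.7)**2,
                                 np.array([[0.0, 1.0], [0.0, 1.0]]))
print(w, cost)
```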

  12. NREL + SolarCity: Maximizing Solar Power on Electrical Grids

    ScienceCinema

    Hannegan, Bryan; Hanley, Ryan; Symko-Davies, Marth

    2018-05-23

    Learn how NREL is partnering with SolarCity to study how to better integrate rooftop solar onto the grid. The work includes collaboration with the Hawaiian Electric Companies (HECO) to analyze high-penetration solar scenarios using advanced modeling and inverter testing at the Energy Systems Integration Facility (ESIF) on NREL’s campus. Results to date have been so promising that HECO has more than doubled the amount of rooftop solar it allows on its grid, showing utilities across the country that distributed solar is not a liability for reliability—and can even be an asset.

  13. New EVSE Analytical Tools/Models: Electric Vehicle Infrastructure Projection Tool (EVI-Pro)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric W; Rames, Clement L; Muratori, Matteo

    This presentation addresses the fundamental question of how much charging infrastructure is needed in the United States to support PEVs. It complements ongoing EVSE initiatives by providing a comprehensive analysis of national PEV charging infrastructure requirements. The result is a quantitative estimate for a U.S. network of non-residential (public and workplace) EVSE that would be needed to support broader PEV adoption. The analysis provides guidance to public and private stakeholders who are seeking to provide nationwide charging coverage, improve the EVSE business case by maximizing station utilization, and promote effective use of private/public infrastructure investments.

  14. Value versus Accuracy: application of seasonal forecasts to a hydro-economic optimization model for the Sudanese Blue Nile

    NASA Astrophysics Data System (ADS)

    Satti, S.; Zaitchik, B. F.; Siddiqui, S.; Badr, H. S.; Shukla, S.; Peters-Lidard, C. D.

    2015-12-01

    The unpredictable nature of precipitation within the East African (EA) region makes it one of the most vulnerable, food insecure regions in the world. There is a vital need for forecasts to inform decision makers, both local and regional, and to help formulate the region's climate change adaptation strategies. Here, we present a suite of different seasonal forecast models, both statistical and dynamical, for the EA region. Objective regionalization is performed for EA on the basis of interannual variability in precipitation in both observations and models. This regionalization is applied as the basis for calculating a number of standard skill scores to evaluate each model's forecast accuracy. A dynamically linked Land Surface Model (LSM) is then applied to determine forecasted flows, which drive the Sudanese Hydroeconomic Optimization Model (SHOM). SHOM combines hydrologic, agronomic and economic inputs to determine the optimal decisions that maximize economic benefits along the Sudanese Blue Nile. This modeling sequence is designed to derive the potential added value of information of each forecasting model to agriculture and hydropower management. Each model's forecasting skill score, along with its added value of information, is ranked in order to compare the performance of each forecast. This research aims to improve understanding of how the accuracy, lead time, and uncertainty of seasonal forecasts influence their utility to the water resources decision makers who rely on them.

  15. Plasma etched surface scanning inspection recipe creation based on bidirectional reflectance distribution function and polystyrene latex spheres

    NASA Astrophysics Data System (ADS)

    Saldana, Tiffany; McGarvey, Steve; Ayres, Steve

    2014-04-01

    The continually increasing demands upon plasma etching systems to self-clean and resume etching with minimal downtime allow for the examination of SiCN, SiO2 and SiN defectivity based upon Surface Scanning Inspection System (SSIS) wafer scan results. Historically, all SSIS wafer scanning recipes have been based upon polystyrene sphere deposition for each film stack and the subsequent creation of light-scattering sizing response curves. This paper explores the feasibility of eliminating Polystyrene Latex Sphere (PSL) and/or process particle deposition on both filmed and bare silicon wafers prior to SSIS recipe creation. The study explores the theoretical maximal SSIS sensitivity based on PSL recipe creation in conjunction with the maximal sensitivity derived from Bidirectional Reflectance Distribution Function (BRDF) modeling recipe creation. The surface roughness (root mean square) of plasma etched wafers varies with the process film stack; a decrease in the root mean square value of the wafer surface equates to higher SSIS sensitivity. Maximal-sensitivity SSIS scan results from bare and filmed wafers, inspected with recipes created from polystyrene/particle deposition and with recipes created from BRDF modeling, will be overlaid against each other to determine the maximal sensitivity and capture rate for each recipe creation mode. A statistically valid sample of defects from each SSIS recipe creation mode and each bare or filmed substrate will be reviewed after SSIS processing on a Defect Review Scanning Electron Microscope (DRSEM). Native defects and polystyrene latex spheres will be collected from each statistically valid defect bin category/size. The data collected from the DRSEM will be utilized to determine the maximum sensitivity capture rate for each recipe creation mode, with emphasis placed upon the sizing accuracy of PSL versus BRDF modeling results based upon automated DRSEM defect sizing. The Mie and Rayleigh scattering responses will be examined in relation to the reported sizing variance of the SSIS to determine the absolute sizing accuracy of the recipes generated from BRDF modeling. This paper explores both the commercial and technical considerations of eliminating PSL deposition as a precursor to SSIS recipe creation. Successful integration of BRDF modeling into the SSIS recipe creation process has the potential to dramatically reduce the recipe creation timeline and vetting period, and to greatly reduce overhead operating costs for high-volume manufacturing sites by eliminating the associated costs of third-party PSL deposition.

  16. A Multi Agent-Based Framework for Simulating Household PHEV Distribution and Electric Distribution Network Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Liu, Cheng; Kim, Hoe Kyoung

    2011-01-01

    The variation of household attributes such as income, travel distance, age, household members, and education for different residential areas may generate different market penetration rates for plug-in hybrid electric vehicles (PHEV). Residential areas with higher PHEV ownership could increase peak electric demand locally and require utilities to upgrade the electric distribution infrastructure even though the capacity of the regional power grid is under-utilized. Estimating the future PHEV ownership distribution at the residential household level can help us understand the impact of the PHEV fleet on power line congestion, transformer overload and other unforeseen problems at the local residential distribution network level. It can also help utilities manage the timing of recharging demand to maximize load factors and utilization of existing distribution resources. This paper presents a multi agent-based simulation framework for 1) modeling spatial distribution of PHEV ownership at the local residential household level, 2) discovering PHEV hot zones where PHEV ownership may quickly increase in the near future, and 3) estimating the impacts of the increasing PHEV ownership on the local electric distribution network with different charging strategies. In this paper, we use Knox County, TN as a case study to show the simulation results of the agent-based model (ABM) framework. However, the framework can be easily applied to other local areas in the US.

  17. Defender-Attacker Decision Tree Analysis to Combat Terrorism.

    PubMed

    Garcia, Ryan J B; von Winterfeldt, Detlof

    2016-12-01

    We propose a methodology, called defender-attacker decision tree analysis, to evaluate defensive actions against terrorist attacks in a dynamic and hostile environment. Like most game-theoretic formulations of this problem, we assume that the defenders act rationally by maximizing their expected utility or minimizing their expected costs. However, we do not assume that attackers maximize their expected utilities. Instead, we encode the defender's limited knowledge about the attacker's motivations and capabilities as a conditional probability distribution over the attacker's decisions. We apply this methodology to the problem of defending against possible terrorist attacks on commercial airplanes, using one of three weapons: infrared-guided MANPADS (man-portable air defense systems), laser-guided MANPADS, or visually targeted RPGs (rocket propelled grenades). We also evaluate three countermeasures against these weapons: DIRCMs (directional infrared countermeasures), perimeter control around the airport, and hardening airplanes. The model includes deterrence effects, the effectiveness of the countermeasures, and the substitution of weapons and targets once a specific countermeasure is selected. It also includes a second stage of defensive decisions after an attack occurs. Key findings are: (1) due to the high cost of the countermeasures, not implementing countermeasures is the preferred defensive alternative for a large range of parameters; (2) if the probability of an attack and the associated consequences are large, a combination of DIRCMs and ground perimeter control are preferred over any single countermeasure. © 2016 Society for Risk Analysis.
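
    As a toy illustration of the defender-side calculation, the sketch below scores each defensive option by its expected cost, with attacker behavior entering as a conditional probability rather than a best response; all options, probabilities, and consequence values are hypothetical.

```python
# Hedged sketch of the defender's expected-cost comparison (numbers invented):
options = {
    #            cost of measure, P(successful attack | measure), consequence
    "none":        (0.0,  0.010, 1_000.0),
    "DIRCM":       (25.0, 0.004, 1_000.0),
    "perimeter":   (5.0,  0.008, 1_000.0),
    "DIRCM+perim": (30.0, 0.002, 1_000.0),
}
expected = {k: c + p * L for k, (c, p, L) in options.items()}
# With attacks rare and countermeasures costly, "none" can minimize expected
# cost, mirroring the paper's first key finding.
print(min(expected, key=expected.get), expected)
```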

  18. A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter

    DTIC Science & Technology

    2015-06-27

    …objective functions in conflict. The first objective function is the information-theoretic part of the problem and aims for entropy maximization, while the second one arises from the constraint in the… theory. For the sake of completeness and clarity, we also summarize how each concept is utilized later. Entropy: A random variable is statistically…

  19. The jet-disk symbiosis without maximal jets: 1D hydrodynamical jets revisited

    NASA Astrophysics Data System (ADS)

    Crumley, Patrick; Ceccobello, Chiara; Connors, Riley M. T.; Cavecchi, Yuri

    2017-05-01

    In this work we discuss the recent criticism by Zdziarski (2016, A&A, 586, A18) of the maximal jet model derived in Falcke & Biermann (1995, A&A, 293, 665). We agree with Zdziarski that in general a jet's internal energy is not bounded by its rest-mass energy density. We describe the effects of the mistake on conclusions that have been made using the maximal jet model and show when a maximal jet is an appropriate assumption. The maximal jet model was used to derive a 1D hydrodynamical model of jets in agnjet, a model that does multiwavelength fitting of quiescent/hard state X-ray binaries and low-luminosity active galactic nuclei. We correct algebraic mistakes made in the derivation of the 1D Euler equation and relax the maximal jet assumption. We show that the corrections cause minor differences as long as the jet has a small opening angle and a small terminal Lorentz factor. We find that the major conclusion from the maximal jet model, the jet-disk symbiosis, can be generally applied to astrophysical jets. We also show that isothermal jets are required to match the flat radio spectra seen in low-luminosity X-ray binaries and active galactic nuclei, in agreement with other works.

  20. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
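
    A minimal sketch of the selection rule described above: choose the next fixation to minimize the expected residual uncertainty, with acuity falling off away from the fixated location. The Gaussian fall-off and the fixed information-gain factor are illustrative assumptions, not the authors' calibrated resolution model.

```python
import numpy as np

def next_fixation(var, xs, sigma=2.0):
    """var[i]: current uncertainty at location xs[i]. Returns the fixation
    point that minimizes total residual uncertainty after one glance."""
    best, best_u = None, np.inf
    for f in xs:
        gain = np.exp(-0.5 * ((xs - f) / sigma) ** 2)  # acuity falls off from fovea
        residual = (var * (1.0 - 0.9 * gain)).sum()    # information gained shrinks variance
        if residual < best_u:
            best, best_u = f, residual
    return best

xs = np.arange(10.0)
var = np.array([1, 1, 5, 1, 1, 1, 4, 1, 1, 1.0])
print(next_fixation(var, xs))  # -> 3.0: between the two uncertainty peaks, nearer the larger
```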

  1. Electromyographic and neuromuscular analysis in patients with post-polio syndrome.

    PubMed

    Corrêa, J C F; Rocco, C Chiusoli de Miranda; de Andrade, D Ventura; Peres, J Augusto; Corrêa, F Ishida

    2008-01-01

    The aim of this study was to perform a comparative analysis of the electromyographic (EMG) activity of the rectus femoris, vastus medialis and vastus lateralis muscles, and to assess muscle strength and fatigue after maximal isometric contraction during knee extension. Eighteen age- and weight-matched patients with post-polio syndrome (PPS) took part in this study. The signal acquisition system consisted of three pairs of surface electrodes positioned on the motor point of the analyzed muscles. The results showed decreased endurance on the initial muscle contraction and during contraction 15 minutes after the initial maximal voluntary contraction, along with muscle fatigue, which was assessed through linear regression using Pearson's test. There were significant differences in the EMG activity of the rectus femoris, vastus medialis and vastus lateralis muscles after maximal isometric contraction during knee extension. Endurance decreased considerably from the initial contraction to the contraction performed after a 15-minute rest, indicating that lower limb muscle fatigue was present in the analyzed PPS patients.

  2. Morphology, mechanical, cross-linking, thermal, and tribological properties of nitrile and hydrogenated nitrile rubber/multi-walled carbon nanotubes composites prepared by melt compounding: The effect of acrylonitrile content and hydrogenation

    NASA Astrophysics Data System (ADS)

    Likozar, Blaž; Major, Zoltan

    2010-11-01

    The purpose of this work was to prepare nanocomposites by mixing multi-walled carbon nanotubes (MWCNT) with nitrile and hydrogenated nitrile elastomers (NBR and HNBR). Utilization of transmission electron microscopy (TEM), scanning electron microscopy (SEM), and small- and wide-angle X-ray scattering techniques (SAXS and WAXS) for advanced morphology observation of conducting filler-reinforced nitrile and hydrogenated nitrile rubber composites is reported. The principal results were increases in hardness (maximally 97 Shore A), elastic modulus (maximally 981 MPa), tensile strength (maximally 27.7 MPa), elongation at break (maximally 216%), cross-link density (maximally 7.94 × 10^28 m^-3), density (maximally 1.16 g cm^-3), and tear strength (11.2 kN m^-1), which were clearly visible at particular acrylonitrile contents both for unhydrogenated and hydrogenated polymers due to enhanced distribution of carbon nanotubes (CNT) and their aggregated particles in the applied rubber matrix. The conclusion was that multi-walled carbon nanotubes improved the performance of nitrile and hydrogenated nitrile rubber nanocomposites prepared by melt compounding.

  3. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    PubMed

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  4. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  5. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system. PMID:24109700

  6. Reactive Scheduling in Multipurpose Batch Plants

    NASA Astrophysics Data System (ADS)

    Narayani, A.; Shaik, Munawar A.

    2010-10-01

    Scheduling is an important operation in process industries for improving resource utilization resulting in direct economic benefits. It has a two-fold objective of fulfilling customer orders within the specified time as well as maximizing the plant profit. Unexpected disturbances such as machine breakdown, arrival of rush orders and cancellation of orders affect the schedule of the plant. Reactive scheduling is generation of a new schedule which has minimum deviation from the original schedule in spite of the occurrence of unexpected events in the plant operation. Recently, Shaik & Floudas (2009) proposed a novel unified model for short-term scheduling of multipurpose batch plants using unit-specific event-based continuous time representation. In this paper, we extend the model of Shaik & Floudas (2009) to handle reactive scheduling.

  7. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose portfolio weights that maximize expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution, and that the risk of the portfolio is measured using the Value-at-Risk (VaR). The optimization of the portfolio is carried out on the Mean-VaR model using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result is a weight-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed; from their return data, the weight composition vector and the efficient surface of the portfolio are obtained. These can be used as a guide for investors in making investment decisions.
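
    As a sketch of the kind of closed form such a Lagrangian derivation yields, the snippet below solves the closely related mean-variance problem with risk tolerance tau under a budget constraint; under normally distributed returns VaR is the mean plus a multiple of the standard deviation, so the solution has the same structure. The numbers are illustrative only.

```python
import numpy as np

def optimal_weights(mu, Sigma, tau):
    """Maximize mu'w - (1/(2*tau)) w'Sigma w subject to sum(w) = 1 via a
    Lagrange multiplier; returns w = tau*Sigma^-1 mu + gamma*Sigma^-1 1."""
    ones = np.ones(len(mu))
    Sinv = np.linalg.inv(Sigma)
    gamma = (1.0 - tau * ones @ Sinv @ mu) / (ones @ Sinv @ ones)  # budget constraint
    return tau * Sinv @ mu + gamma * Sinv @ ones                   # weights sum to 1

mu = np.array([0.08, 0.12, 0.10])          # illustrative mean returns
Sigma = np.array([[0.04, 0.01, 0.00],      # illustrative covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
print(optimal_weights(mu, Sigma, tau=0.5))
```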

  8. The key kinematic determinants of undulatory underwater swimming at maximal velocity.

    PubMed

    Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross

    2016-01-01

    The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r2 = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r2 = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.

  9. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  10. Nurses wanted Is the job too harsh or is the wage too low?

    PubMed

    Di Tommaso, M L; Strøm, S; Saether, E M

    2009-05-01

    When entering the job market, nurses choose among different kinds of jobs. Each of these jobs is characterized by wage, sector (primary care or hospital) and shift (daytime work or shift work). This paper estimates a multi-sector-job-type random utility model of labor supply on data for Norwegian registered nurses (RNs) in 2000. The empirical model implies that labor supply is rather inelastic; a 10% increase in the wage rates for all nurses is estimated to yield a 3.3% increase in overall labor supply. This modest aggregate response masks much stronger inter-job-type responses. Our approach differs from previous studies in two ways. First, to our knowledge, it is the first time that a model of labor supply for nurses is estimated explicitly taking into account the choices that RNs have regarding work place and type of job. Second, it differs from previous studies with respect to the measurement of the compensation for different types of work. So far, the focus has been on wage differentials, but there are more attributes to a job than the wage. Based on the estimated random utility model, we therefore calculate the expected value of the compensation that makes a utility-maximizing agent indifferent between types of jobs, here between shift work and daytime work. It turns out that Norwegian nurses may be willing to work shifts, relative to daytime work, for a lower wage than the current one.
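
    A stylized version of that indifference calculation, assuming a utility that is linear in the wage; the coefficients are invented for illustration and are not the paper's estimates.

```python
# With U_j = b_w * wage_j - b_shift * shift_j + e_j, indifference between shift
# and daytime work requires a wage premium of b_shift / b_w for shift work.
# A negative b_shift (shift work preferred on balance) makes the required
# premium negative, in the spirit of the paper's finding for nurses.
b_w, b_shift = 0.12, -0.02   # illustrative coefficients only
print(b_shift / b_w)         # required relative wage premium for shift work
```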

  11. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  12. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  13. Long-term reliability of ImPACT in professional ice hockey.

    PubMed

    Echemendia, Ruben J; Bruce, Jared M; Meeuwisse, Willem; Comper, Paul; Aubry, Mark; Hutchison, Michael

    2016-02-01

    This study sought to assess the test-retest reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) across 2-4 year time intervals and evaluate the utility of a newly proposed two-factor (Speed/Memory) model of ImPACT across multiple language versions. Test-retest data were collected from non-concussed National Hockey League (NHL) players across 2-, 3-, and 4-year time intervals. The two-factor model was examined using different language versions (English, French, Czech, Swedish) of the test using a one-year interval, and across 2-4 year intervals using the English version of the test. The two-factor Speed index improved reliability across multiple language versions of ImPACT. The Memory factor also improved but reliability remained below the traditional cutoff of .70 for use in clinical decision-making. ImPACT reliabilities remained low (below .70) regardless of whether the four-composite or the two-factor model was used across 2-, 3-, and 4-year time intervals. The two-factor approach increased ImPACT's one-year reliability over the traditional four-composite model among NHL players. The increased stability in test scores improves the test's ability to detect cognitive changes following injury, which increases the diagnostic utility of the test and allows for better return to play decision-making by reducing the risk of exposing an athlete to additional trauma while the brain may be at a heightened vulnerability to such trauma. Although the Speed Index increases the clinical utility of the test, the stability of the Memory index remains low. Irrespective of whether the two-factor or traditional four-composite approach is used, these data suggest that new baselines should occur on a yearly basis in order to maximize clinical utility.

  14. State-based versus reward-based motivation in younger and older adults.

    PubMed

    Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd

    2014-12-01

    Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed similar behavior to younger adults, our model-fitting results suggested that this was due to the utilization of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age related declines in the neural structures needed for more computationally demanding model-based decision making.

  15. Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.

    A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model's utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O modeling approach offers promise for enhancing the design of free phase LNAPL recovery systems and for helping hydrogeologists, engineers, and regulators make cost-effective operation and management decisions.

  16. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    NASA Astrophysics Data System (ADS)

    Davendralingam, Navindran

    Conceptual design of aircraft and the airline network (routes) on which aircraft fly are inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to weigh the risk of serving projected future demand with a yet-to-be-introduced aircraft against the profits it could generate. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is employed to simulate the reflexive nature of airline supply-demand interactions by modeling the aggregate changes in demand that would result from tactical allocations of aircraft to maximize profit. The best yet-to-be-introduced aircraft maximizes profit by minimizing the long-term fleetwide direct operating costs.

  17. Optimal coordination of maximal-effort horizontal and vertical jump motions – a computer simulation study

    PubMed Central

    Nagano, Akinori; Komura, Taku; Fukashiro, Senshi

    2007-01-01

    Background: The purpose of this study was to investigate the coordination strategy of maximal-effort horizontal jumping in comparison with vertical jumping, using the methodology of computer simulation. Methods: A skeletal model that has nine rigid body segments and twenty degrees of freedom was developed. Thirty-two Hill-type lower limb muscles were attached to the model. The excitation-contraction dynamics of the contractile element, the tissues around the joints to limit the joint range of motion, as well as the foot-ground interaction were implemented. Simulations were initiated from an identical standing posture for both motions. The optimal pattern of the activation input signal was searched through numerical optimization. For the horizontal jumping, the goal was to maximize the horizontal distance traveled by the body's center of mass. For the vertical jumping, the goal was to maximize the height reached by the body's center of mass. Results: It was found that the hip joint was utilized more vigorously in the horizontal jumping than in the vertical jumping. The muscles that have a function of joint flexion such as the m. iliopsoas, m. rectus femoris and m. tibialis anterior were activated to a greater level during the countermovement in the horizontal jumping, with an effect of moving the body's center of mass in the forward direction. Muscular work was transferred to the mechanical energy of the body's center of mass more effectively in the horizontal jump, which resulted in a greater energy gain of the body's center of mass throughout the motion. Conclusion: These differences in the optimal coordination strategy seem to be caused by the requirement that the body's center of mass needs to be located above the feet in a vertical jumping, whereas this requirement is not so strict in a horizontal jumping. PMID:17543118

  18. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075

  19. Biofuel supply chain considering depreciation cost of installed plants

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Ramezankhani, Farshad; Giahi, Ramin; Farshbaf-Geranmayeh, Amir

    2016-06-01

    Due to the depletion of fossil fuels and major concerns about the future security of energy for fuel production, the importance of utilizing renewable energies is clear. There has been growing interest in biofuels. Thus, this paper presents a general optimization model which enables the selection of preprocessing centers for the biomass, biofuel plants, and warehouses to store the biofuels. The objective of this model is to maximize the total benefits. Costs in the model consist of the setup costs of preprocessing centers, plants and warehouses, transportation costs, production costs, emission costs and the depreciation cost. First, the depreciation cost of the centers is calculated by means of three methods; the model then chooses the best depreciation method in each period by switching between them. A numerical example is presented and solved by the CPLEX solver in GAMS software, and finally, sensitivity analyses are carried out.
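
    A small sketch of that per-period switching idea, using three textbook depreciation schedules; the figures, and the choice of "best" as the largest charge per period, are illustrative assumptions.

```python
# Compute each period's depreciation charge under three standard methods and
# pick the most favorable one per period (illustrative figures only).
def straight_line(cost, salvage, life, t):
    return (cost - salvage) / life

def declining_balance(cost, salvage, life, t, rate=2.0):
    book = cost * (1 - rate / life) ** (t - 1)           # book value entering period t
    return min(book * rate / life, max(book - salvage, 0.0))

def sum_of_years(cost, salvage, life, t):
    syd = life * (life + 1) / 2
    return (cost - salvage) * (life - t + 1) / syd

cost, salvage, life = 1_000_000.0, 100_000.0, 10
for t in range(1, life + 1):
    charges = {f.__name__: f(cost, salvage, life, t)
               for f in (straight_line, declining_balance, sum_of_years)}
    print(t, max(charges, key=charges.get))  # method chosen in period t
```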

  20. HEPATOKIN1 is a biochemistry-based model of liver metabolism for applications in medicine and pharmacology.

    PubMed

    Berndt, Nikolaus; Bulik, Sascha; Wallach, Iwona; Wünsch, Tilo; König, Matthias; Stockmann, Martin; Meierhofer, David; Holzhütter, Hermann-Georg

    2018-06-19

    The epidemic increase of non-alcoholic fatty liver diseases (NAFLD) requires a deeper understanding of the regulatory circuits controlling the response of liver metabolism to nutritional challenges, medical drugs, and genetic enzyme variants. As in vivo studies of human liver metabolism are encumbered with serious ethical and technical issues, we developed a comprehensive biochemistry-based kinetic model of the central liver metabolism including the regulation of enzyme activities by their reactants, allosteric effectors, and hormone-dependent phosphorylation. The utility of the model for basic research and applications in medicine and pharmacology is illustrated by simulating diurnal variations of the metabolic state of the liver at various perturbations caused by nutritional challenges (alcohol), drugs (valproate), and inherited enzyme disorders (galactosemia). Using proteomics data to scale maximal enzyme activities, the model is used to highlight differences in the metabolic functions of normal hepatocytes and malignant liver cells (adenoma and hepatocellular carcinoma).

  1. The efficiency of convective energy transport in the sun

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.

    1988-01-01

    Mixing length theory (MLT) utilizes adiabatic expansion (as well as radiative transport) to diminish the energy content of rising convective elements. Thus in MLT, the rising elements lose their energy to the environment most efficiently and consequently transport heat with the least efficiency. On the other hand, Malkus proposed that convection would maximize the efficiency of energy transport. A new stellar envelope code is developed to first examine this other extreme, wherein rising turbulent elements transport heat with the greatest possible efficiency. This other extreme model differs from MLT by providing a small reduction in the upper convection zone temperatures but greatly diminished turbulent velocities below the top few hundred kilometers. Using the findings of deep atmospheric models with the Navier-Stokes equation allows the calculation of an intermediate solar envelope model. Consideration is given to solar observations, including recent helioseismology, to examine the position of the solar envelope compared with the envelope models.

  2. Fair Package Assignment

    NASA Astrophysics Data System (ADS)

    Lahaie, Sébastien; Parkes, David C.

    We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.

  3. Cylindrical heat conduction and structural acoustic models for enclosed fiber array thermophones.

    PubMed

    Dzikowicz, Benjamin R; Tressler, James F; Baldwin, Jeffrey W

    2017-11-01

    Calculation of the heat loss for thermophone heating elements is a function of their geometry and the thermodynamics of their surroundings. Steady-state behavior is difficult to establish or evaluate as heat is only flowing in one direction in the device. However, for a heating element made from an array of carbon fibers in a planar enclosure, several assumptions can be made, leading to simple solutions of the heat equation. These solutions can be used to more carefully determine the efficiency of thermophones of this geometry. Acoustic response is predicted with the application of a Helmholtz resonator and thin plate structural acoustics models. A laboratory thermophone utilizing a sparse horizontal array of fine (6.7 μm diameter) carbon fibers is designed and tested. Experimental results are compared with the model. The model is also used to examine the optimal array density for maximal efficiency.

  4. Environmental degradation and remediation: is economics part of the problem?

    PubMed

    Dore, Mohammed H I; Burton, Ian

    2003-01-01

    It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge. This is due to the fact that the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer utility measured in dollar terms can only trivialize the problem of global climate change.

  5. An Oxidase-Based Electrochemical Fluidic Sensor with High-Sensitivity and Low-Interference by On-Chip Oxygen Manipulation

    PubMed Central

    Radhakrishnan, Nitin; Park, Jongwon; Kim, Chang-Soo

    2012-01-01

    Utilizing a simple fluidic structure, we demonstrate the improved performance of oxidase-based enzymatic biosensors. Electrolysis of water is utilized to generate bubbles to manipulate the oxygen microenvironment close to the biosensor in a fluidic channel. For the proper enzyme reactions to occur, a simple mechanical procedure of manipulating bubbles was developed to maximize the oxygen level while minimizing the pH change after electrolysis. The sensors show improved sensitivities based on the oxygen dependency of enzyme reaction. In addition, this oxygen-rich operation minimizes the ratio of electrochemical interference signal by ascorbic acid during sensor operation (i.e., amperometric detection of hydrogen peroxide). Although creatinine sensors have been used as the model system in this study, this method is applicable to many other biosensors that can use oxidase enzymes (e.g., glucose, alcohol, phenol, etc.) to implement a viable component for in-line fluidic sensor systems. PMID:23012527

  6. Optimizing signal output: effects of viscoelasticity and difference frequency on vibroacoustic radiation of tissue-mimicking phantoms

    NASA Astrophysics Data System (ADS)

    Namiri, Nikan K.; Maccabi, Ashkan; Bajwa, Neha; Badran, Karam W.; Taylor, Zachary D.; St. John, Maie A.; Grundfest, Warren S.; Saddik, George N.

    2018-02-01

    Vibroacoustography (VA) is an imaging technology that utilizes the acoustic response of tissues to a localized, low frequency radiation force to generate a spatially resolved, high contrast image. Previous studies have demonstrated the utility of VA for tissue identification and margin delineation in cancer tissues. However, the relationship between specimen viscoelasticity and vibroacoustic emission remains to be fully quantified. This work utilizes the effects of variable acoustic wave profiles on unique tissue-mimicking phantoms (TMPs) to maximize VA signal power according to tissue mechanical properties, particularly elasticity. A micro-indentation method was utilized to provide measurements of the elastic modulus for each biological replica. An inverse relationship was found between elastic modulus (E) and VA signal amplitude among homogeneous TMPs. Additionally, the difference frequency (Δf) required to reach maximum VA signal correlated with specimen elastic modulus. Peak signal diminished with increasing Δf among the polyvinyl alcohol specimens, suggesting an inefficient vibroacoustic response beyond a resonant Δf threshold. Comparison of these measurements may provide additional information to improve tissue modeling and system characterization, as well as insights into the unique tissue composition of tumors in head and neck cancer patients.

  7. Percolation of binary disk systems: Modeling and theory

    DOE PAGES

    Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.

    2017-01-12

    The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing the percolation threshold for binary disk systems is suggested.
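
    As a rough illustration of the simulation approach described above, the sketch below scatters disks of two radii in a unit box and estimates the left-to-right spanning probability with a union-find structure; all densities, radii, and trial counts are invented for illustration, not the paper's values.

      import random

      def find(parent, i):
          # path-halving find
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      def union(parent, a, b):
          ra, rb = find(parent, a), find(parent, b)
          if ra != rb:
              parent[ra] = rb

      def spans(disks):
          """True if overlapping disks connect the left edge to the right edge of the unit box."""
          n = len(disks)
          parent = list(range(n + 2))          # n disks plus two virtual "wall" nodes
          LEFT, RIGHT = n, n + 1
          for i, (x, y, r) in enumerate(disks):
              if x - r <= 0.0: union(parent, i, LEFT)
              if x + r >= 1.0: union(parent, i, RIGHT)
              for j in range(i):
                  xj, yj, rj = disks[j]
                  if (x - xj) ** 2 + (y - yj) ** 2 <= (r + rj) ** 2:
                      union(parent, i, j)      # disks overlap -> same cluster
          return find(parent, LEFT) == find(parent, RIGHT)

      def spanning_probability(n_small, n_large, r_small, r_large, trials=200):
          hits = 0
          for _ in range(trials):
              disks = [(random.random(), random.random(), r_small) for _ in range(n_small)]
              disks += [(random.random(), random.random(), r_large) for _ in range(n_large)]
              hits += spans(disks)
          return hits / trials

      print(spanning_probability(n_small=300, n_large=30, r_small=0.02, r_large=0.06))

    Sweeping the disk counts and recording where the spanning probability crosses one half is the usual way such simulations locate a percolation threshold.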

  8. UceWeb: a web-based collaborative tool for collecting and sharing quality of life data.

    PubMed

    Parimbelli, E; Sacchi, L; Rubrichi, S; Mazzanti, A; Quaglini, S

    2015-01-01

    This work aims at building a platform where quality-of-life data, namely utility coefficients, can be elicited not only for immediate use, but also systematically stored together with patient profiles to build a public repository to be further exploited in studies on specific target populations (e.g. cost/utility analyses). We capitalized on utility theory and previous experience to define a set of desirable features that such a tool should offer to facilitate sound elicitation of quality of life. A set of visualization tools and algorithms has been developed to this purpose. To make it easily accessible to potential users, the software has been designed as a web application. A pilot validation study has been performed on 20 atrial fibrillation patients. A collaborative platform, UceWeb, has been developed and tested. It implements the standard gamble, time trade-off, and rating-scale utility elicitation methods. It allows doctors and patients to choose the mode of interaction to maximize patients' comfort in answering difficult questions. Every utility elicitation may contribute to the growth of the repository. UceWeb can become a unique source of data allowing researchers both to perform more reliable comparisons among healthcare interventions and to build statistical models to gain deeper insight into quality-of-life data.
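
    For readers unfamiliar with the three elicitation methods named above, the snippet below shows the textbook conversions from a patient's responses to a utility coefficient; this is a generic sketch of the standard formulas, not UceWeb's actual code.

      def standard_gamble(p_indifference):
          # utility = probability of full health at which the patient is indifferent
          # between the certain health state and a gamble (full health vs. death)
          return p_indifference

      def time_trade_off(years_full_health, years_in_state):
          # utility = x / t when x years in full health are judged equivalent
          # to t years in the health state being valued
          return years_full_health / years_in_state

      def rating_scale(rating, worst=0.0, best=100.0):
          # linear rescaling of a visual-analogue rating onto [0, 1]
          return (rating - worst) / (best - worst)

      print(standard_gamble(0.85), time_trade_off(7, 10), rating_scale(70))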

  9. Producing regionally-relevant multiobjective tradeoffs to engage with Colorado water managers

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Basdekas, L.; Dilling, L.

    2016-12-01

    Disseminating results from water resources systems analysis research can be challenging when there are political or regulatory barriers associated with real-world models, or when a research model does not incorporate management context to which practitioners can relate. As part of a larger transdisciplinary study, we developed a broadly-applicable case study in collaboration with our partners at six diverse water utilities in the Front Range of Colorado, USA. Our model, called the "Eldorado Utility Planning Model", incorporates realistic water management decisions and objectives and achieves a pragmatic balance between system complexity and simplicity. Using the sophisticated modeling platform RiverWare, we modeled a spatially distributed regional network in which, under varying climate scenarios, the Eldorado Utility can meet growing demand from its variety of sources and by interacting with other users in the network. In accordance with complicated Front Range water laws, ownership, priority of use, and restricted uses of water are tracked through RiverWare's accounting functionality. To achieve good system performance, Eldorado can make decisions such as expanding or building a reservoir, purchasing rights from one or more actors, and enacting conservation. This presentation introduces the model and shows how it can be used to aid researchers in developing multi-objective evolutionary algorithm (MOEA)-based optimization for similar multi-reservoir systems in Colorado and the Western US. Within the optimization, system performance is quantified by five objectives: minimizing time under restrictions, new storage capacity, newly developed supply, and uncaptured water, and maximizing year-end storage. Our results demonstrate critical tradeoffs between the objectives and show how these tradeoffs are affected by several realistic climate change scenarios. These results were used within an interactive workshop that helped demonstrate the application of MOEA-based optimization for water management in the western US.

  10. Dynamic Resource Management for Parallel Tasks in an Oversubscribed Energy-Constrained Heterogeneous Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Koenig, Gregory A; Machovec, Dylan

    2016-01-01

    The worth of completing parallel tasks is modeled using utility functions, which monotonically decrease with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of such a system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 hours). To maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy-constrained, we have designed, analyzed, and compared different heuristic techniques. Four utility-aware heuristics (i.e., Max Utility, Max Utility-per-Time, Max Utility-per-Resource, and Max Utility-per-Energy), three FCFS-based heuristics (Conservative Backfilling, EASY Backfilling, and FCFS with Multiple Queues), and a Random heuristic were examined in this study. A technique that is often used with the FCFS-based heuristics is the concept of a permanent reservation. We compare the performance of permanent reservations with temporary place-holders to demonstrate the advantages that place-holders can provide. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in an energy-constrained oversubscribed HPC environment. With place-holders, energy filtering, and dropping tasks with low potential utility, our utility-aware heuristics are able to significantly outperform the existing FCFS-based techniques.
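
    A toy sketch of the flavor of one of the utility-aware heuristics named above (Max Utility-per-Resource): among ready tasks, start the one whose utility at its projected completion time, divided by its node request, is largest. The linear utility decay and all task parameters are invented assumptions for illustration, not the study's workloads.

      def utility_at(task, t):
          # linearly decaying utility floored at zero (an assumed shape;
          # the study only requires that utility decrease monotonically)
          return max(0.0, task["u0"] - task["decay"] * t)

      def pick_next(ready_tasks, now, free_nodes):
          # greedy Max Utility-per-Resource selection
          best, best_score = None, -1.0
          for task in ready_tasks:
              if task["nodes"] > free_nodes:
                  continue                      # does not fit right now
              score = utility_at(task, now + task["runtime"]) / task["nodes"]
              if score > best_score:
                  best, best_score = task, score
          return best

      tasks = [
          {"name": "A", "u0": 100.0, "decay": 1.0, "runtime": 10.0, "nodes": 4},
          {"name": "B", "u0": 80.0,  "decay": 0.1, "runtime": 5.0,  "nodes": 2},
      ]
      print(pick_next(tasks, now=0.0, free_nodes=8)["name"])   # picks "B"

    Swapping the denominator for projected runtime or energy gives the Max Utility-per-Time and Max Utility-per-Energy variants; dropping it entirely gives Max Utility.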

  11. Digital asset management.

    PubMed

    Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David

    2010-05-01

    Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images.

  12. Flowfield characterization and model development in detonation tubes

    NASA Astrophysics Data System (ADS)

    Owens, Zachary Clark

    A series of experiments and numerical simulations are performed to advance the understanding of flowfield phenomena and impulse generation in detonation tubes. Experiments employing laser-based velocimetry, high-speed schlieren imaging, and pressure measurements are used to construct a dataset against which numerical models can be validated. The numerical modeling culminates in the development of a two-dimensional, multi-species, finite-rate-chemistry, parallel, Navier-Stokes solver. The resulting model is specifically designed to assess unsteady, compressible, reacting flowfields, and its utility for studying multidimensional detonation structure is demonstrated. A reduced, quasi-one-dimensional model with source terms accounting for wall losses is also developed for rapid parametric assessment. Using these experimental and numerical tools, two primary objectives are pursued. The first objective is to gain an understanding of how nozzles affect unsteady detonation flowfields and how they can be designed to maximize impulse in a detonation-based propulsion system called a pulse detonation engine. It is shown that unlike conventional, steady-flow propulsion systems, where converging-diverging nozzles generate optimal performance, unsteady detonation tube performance during a single cycle is maximized using purely diverging nozzles. The second objective is to identify the primary underlying mechanisms that cause velocity and pressure measurements to deviate from idealized theory. An investigation of the influence of non-ideal losses including wall heat transfer, friction, and condensation leads to the development of improved models that reconcile long-standing discrepancies between predicted and measured detonation tube performance. It is demonstrated for the first time that wall condensation of water vapor in the combustion products can cause significant deviations from ideal theory.

  13. Development and verification of a new wind speed forecasting system using an ensemble Kalman filter data assimilation technique in a fully coupled hydrologic and atmospheric model

    NASA Astrophysics Data System (ADS)

    Williams, John L.; Maxwell, Reed M.; Monache, Luca Delle

    2013-12-01

    Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because wind is chaotic and inherently intermittent. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. We have adapted the Data Assimilation Research Testbed (DART), a community software facility which includes the ensemble Kalman filter (EnKF) algorithm, to expand our capability to use observational data to improve forecasts produced with a fully coupled hydrologic and atmospheric modeling system: the ParFlow (PF) hydrologic model and the Weather Research and Forecasting (WRF) mesoscale atmospheric model, coupled via mass and energy fluxes across the land surface, and resulting in the PF.WRF model. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. We have used the PF.WRF model to explore the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture, and wind speed, and demonstrated that reductions in uncertainty in these coupled fields realized through assimilation of soil moisture observations propagate through the hydrologic and atmospheric system. The sensitivities found in this study will enable further studies to optimize observation strategies to maximize the utility of the PF.WRF-DART forecasting system.

  14. Kinetics of CO2 diffusion in human carbonic anhydrase: a study using molecular dynamics simulations and the Markov-state model.

    PubMed

    Chen, Gong; Kong, Xian; Lu, Diannan; Wu, Jianzhong; Liu, Zheng

    2017-05-10

    Molecular dynamics (MD) simulations, in combination with the Markov-state model (MSM), were applied to probe CO₂ diffusion from an aqueous solution into the active site of human carbonic anhydrase II (hCA-II), an enzyme useful for enhanced CO₂ capture and utilization. The diffusion process in the hydrophobic pocket of hCA-II was illustrated in terms of a two-dimensional free-energy landscape. We found that CO₂ diffusion in hCA-II is a rate-limiting step in the CO₂ diffusion-binding-reaction process. The equilibrium distribution of CO₂ shows its preferential accumulation within a hydrophobic domain in the protein core region. An analysis of the committors and reactive fluxes indicates that the main pathway for CO₂ diffusion into the active site of hCA-II is through a binding pocket where residue Gln 136 contributes to the maximal flux. The simulation results offer a new perspective on the CO₂ hydration kinetics and useful insights toward the development of novel biochemical processes for more efficient CO₂ sequestration and utilization.
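
    As a hedged illustration of the Markov-state-model machinery referenced above, the sketch below estimates a transition matrix and its stationary distribution from an already-discretized trajectory; the toy state sequence stands in for clustered MD frames of the CO₂/hCA-II system.

      import numpy as np

      def msm_transition_matrix(dtraj, n_states, lag):
          # count state-to-state transitions at the chosen lag time
          counts = np.zeros((n_states, n_states))
          for t in range(len(dtraj) - lag):
              counts[dtraj[t], dtraj[t + lag]] += 1
          # row-normalize; empty rows fall back to a uniform distribution
          rows = counts.sum(axis=1, keepdims=True)
          return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)

      dtraj = np.array([0, 0, 1, 1, 2, 1, 0, 1, 2, 2, 2, 1])   # placeholder state labels
      T = msm_transition_matrix(dtraj, n_states=3, lag=1)

      # stationary distribution (equilibrium state populations) = left
      # eigenvector of T with eigenvalue 1
      w, v = np.linalg.eig(T.T)
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi /= pi.sum()
      print(T, pi)

    In a real MSM study, committor and reactive-flux analyses of the kind mentioned in the abstract are then computed from this estimated transition matrix.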

  15. Information transmission in hippocampal CA1 neuron models in the presence of poisson shot noise: the case of periodic sub-threshold spike trains.

    PubMed

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2006-01-01

    This article presents an analysis of the information transmission of periodic sub-threshold spike trains in a hippocampal CA1 neuron model in the presence of a homogeneous Poisson shot noise. In the computer simulation, periodic sub-threshold spike trains were presented repeatedly to the midpoint of the main apical branch, while the homogeneous Poisson shot noise was applied to the midpoint of a basal dendrite in the CA1 neuron model consisting of the soma with one sodium, one calcium, and five potassium channels. From spike firing times recorded at the soma, the inter-spike intervals were generated, and the probability, p(T), of the inter-spike interval histogram corresponding to the period, T, of the periodic input spike trains was estimated to obtain an index of information transmission. It is shown that at a specific amplitude of the homogeneous Poisson shot noise, p(T) was maximized and the ability to encode the periodic sub-threshold spike trains was greatest. This implies that setting the amplitude of the homogeneous Poisson shot noise to the specific values that maximize information transmission might contribute to efficiently encoding periodic sub-threshold spike trains by utilizing stochastic resonance.
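
    The stochastic-resonance effect described above can be illustrated with a much simpler neuron than the detailed CA1 model used in the study. The sketch below drives a leaky integrate-and-fire unit (an assumed stand-in) with a sub-threshold periodic input plus Poisson shot noise and reports the fraction of inter-spike intervals near the input period, which would be expected to peak at an intermediate noise amplitude; all parameters are invented.

      import math, random

      def isi_match_fraction(noise_amp, period=50.0, dt=0.1, t_max=20000.0,
                             rate=0.4, tau=10.0, threshold=1.0, seed=1):
          random.seed(seed)
          v, last_spike, isis, t = 0.0, None, [], 0.0
          while t < t_max:
              # periodic drive whose steady-state effect stays below threshold
              drive = 0.8 * (1.0 + math.sin(2 * math.pi * t / period)) / 2.0
              if random.random() < rate * dt:      # Poisson shot noise event
                  v += noise_amp
              v += dt * (-v + drive) / tau          # leaky integration
              if v >= threshold:                    # spike and reset
                  if last_spike is not None:
                      isis.append(t - last_spike)
                  last_spike, v = t, 0.0
              t += dt
          if not isis:
              return 0.0
          hits = sum(1 for isi in isis if abs(isi - period) < 0.1 * period)
          return hits / len(isis)

      for amp in (0.05, 0.2, 0.6):
          print(amp, isi_match_fraction(amp))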

  16. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  17. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive, and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
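
    A minimal sketch of the three classical heuristic rules extended in the paper, written with fixed thresholds for a single decision maker (the paper's contribution, threshold heterogeneity, would randomize these thresholds across pedestrians); attribute names and values are invented.

      def conjunctive(option, thresholds):
          # accept only if EVERY attribute clears its threshold
          return all(option[k] >= t for k, t in thresholds.items())

      def disjunctive(option, thresholds):
          # accept if ANY attribute clears its threshold
          return any(option[k] >= t for k, t in thresholds.items())

      def lexicographic(options, priority):
          # compare on the most important attribute first; ties fall
          # through to the next attribute in the priority order
          return max(options, key=lambda o: tuple(o[k] for k in priority))

      shops = [{"variety": 3, "proximity": 5}, {"variety": 4, "proximity": 2}]
      ths = {"variety": 3, "proximity": 3}
      print([conjunctive(s, ths) for s in shops])            # [True, False]
      print(lexicographic(shops, ["variety", "proximity"]))  # the second shop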

  18. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE PAGES

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...

    2017-08-18

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO₂ emitters, and high-volume, continuous reservoirs such as deep saline formations to function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is to ever achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO₂ utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO₂ storage costs while simultaneously maximizing revenue streams via the utilization of CO₂ as a commodity for enhanced hydrocarbon recovery.

  19. Ecosocial consequences and policy implications of disease management in East African agropastoral systems.

    PubMed

    Gutierrez, Andrew Paul; Gilioli, Gianni; Baumgärtner, Johann

    2009-08-04

    International research and development efforts in Africa have brought ecological and social change, but analyzing the consequences of this change and developing policy to manage it for sustainable development has been difficult. This has been largely due to a lack of conceptual and analytical models to assess the interacting dynamics of the different components of ecosocial systems. Here, we examine the ecological and social changes resulting from an ongoing suppression of trypanosomiasis disease in cattle in an agropastoral community in southwest Ethiopia to illustrate how such problems may be addressed. The analysis combines physiologically based demographic models of pasture, cattle, and pastoralists and a bioeconomic model that includes the demographic models as dynamic constraints in the economic objective function that maximizes the utility of individual consumption under different levels of disease risk in cattle. Field data and model analysis show that suppression of trypanosomiasis leads to increased cattle and human populations and to increased agricultural development. However, in the absence of sound management, these changes will lead to a decline in pasture quality and an increase in the risk from tick-borne diseases in cattle and malaria in humans that would threaten system sustainability and resilience. The analysis of these conflicting outcomes of trypanosomiasis suppression is used to illustrate the need for and utility of conceptual bioeconomic models to serve as a basis for developing policy for sustainable agropastoral resource management in sub-Saharan Africa.

  20. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO₂ emitters, and high-volume, continuous reservoirs such as deep saline formations to function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is to ever achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO₂ utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO₂ storage costs while simultaneously maximizing revenue streams via the utilization of CO₂ as a commodity for enhanced hydrocarbon recovery.

  1. Demand side management in recycling and electricity retail pricing

    NASA Astrophysics Data System (ADS)

    Kazan, Osman

    This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data, and the scheduling improvements and financial benefits are presented. The second problem is from the electricity retail market. Daily electricity usage follows well-known hourly patterns, which change in shape and magnitude with season and day of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools to reduce peak-load electricity demand for retailers. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components: unit generation costs and consumers' utility. Both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit. As a result, a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the consumers' utility distribution; for the general case, we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies, and the recommended model reduces the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers, determining prices using historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent System Operators. A consumer experiment is established to replicate the peak-load behavior. As a result, the consumers' utility function is estimated and optimal retail electricity prices are computed.

  2. How humans integrate the prospects of pain and reward during choice

    PubMed Central

    Talmi, Deborah; Dayan, Peter; Kiebel, Stefan J.; Frith, Chris D.; Dolan, Raymond J.

    2010-01-01

    The maxim “no pain, no gain” summarises scenarios where an action leading to reward also entails a cost. Although we know a substantial amount about how the brain represents pain and reward separately, we know little about how they are integrated during goal-directed behaviour. Two theoretical models might account for the integration of reward and pain. An additive model specifies that the disutility of costs is summed linearly with the utility of benefits, while an interactive model suggests that cost and benefit utilities interact so that the sensitivity to benefits is attenuated as costs become increasingly aversive. Using a novel task that required integration of physical pain and monetary reward, we examined the mechanism underlying cost-benefit integration in humans. We provide evidence in support of an interactive model in behavioural choice. Using functional neuroimaging we identify a neural signature for this interaction, such that when the consequences of actions embody a mixture of reward and pain, there is an attenuation of a predictive reward signal in both ventral anterior cingulate cortex and ventral striatum. We conclude that these regions subserve integration of action costs and benefits in humans, a finding that suggests a cross-species similarity in neural substrates that implement this function and illuminates mechanisms that underlie altered decision making under aversive conditions. PMID:19923294
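
    One schematic way to contrast the two hypotheses, in notation assumed here rather than taken from the paper (benefit utility $u(b)$, cost disutility $d(c)$, and a cost-dependent attenuation weight $w(c)$ decreasing in $c$):

      \text{additive:}\quad V(b, c) = u(b) - d(c)
      \text{interactive:}\quad V(b, c) = w(c)\,u(b) - d(c)

    Under the interactive model the marginal sensitivity to benefits, $\partial V / \partial b = w(c)\,u'(b)$, shrinks as costs become more aversive, which is the attenuation the behavioural and imaging results supported.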

  3. The Factory of the Future

    NASA Technical Reports Server (NTRS)

    Byman, J. E.

    1985-01-01

    A brief history of aircraft production techniques is given. A flexible machining cell is then described: a computer-controlled system capable of performing 4-axis machining, part cleaning, dimensional inspection, and materials handling functions in an unmanned environment. The cell was designed to allow processing of similar and dissimilar parts in random order without disrupting production; allow serial (one-shipset-at-a-time) manufacturing; reduce work-in-process inventory; maximize machine utilization through remote set-up; and maximize throughput while minimizing labor.

  4. Bandwidth auction for SVC streaming in dynamic multi-overlay

    NASA Astrophysics Data System (ADS)

    Xiong, Yanting; Zou, Junni; Xiong, Hongkai

    2010-07-01

    In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to dynamic join of competing peers and video layers.
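
    As a rough sketch of the upstream side of one such auction round, the snippet below allocates a fixed upload capacity to the highest per-unit bids first, which maximizes that round's revenue for divisible bandwidth; bids and capacity are invented, and the paper's coupling to SVC layer dropping/adding is omitted.

      def allocate(capacity, bids):
          """bids: list of (peer, price_per_unit, units_requested)."""
          allocation, revenue = {}, 0.0
          # serve the highest per-unit prices first
          for peer, price, units in sorted(bids, key=lambda b: -b[1]):
              grant = min(units, capacity)
              if grant <= 0:
                  break                      # capacity exhausted
              allocation[peer] = grant
              revenue += price * grant
              capacity -= grant
          return allocation, revenue

      print(allocate(100, [("p1", 0.5, 60), ("p2", 0.8, 70), ("p3", 0.3, 50)]))
      # p2 is served fully (70 units), p1 gets the remaining 30, p3 gets nothing

    A downstream peer's bidding strategy would then adjust its bids each round based on which video layers the granted bandwidth can sustain.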

  5. Behavioral Economics Applied to Energy Demand Analysis: A Foundation

    EIA Publications

    2014-01-01

    Neoclassical economics has shaped our understanding of human behavior for several decades. While still an important starting point for economic studies, neoclassical frameworks have generally imposed strong assumptions, for example regarding utility maximization, information, and foresight, while treating consumer preferences as given or external to the framework. In real life, however, such strong assumptions tend to be less than fully valid. Behavioral economics refers to the study and formalizing of theories regarding deviations from traditionally-modeled economic decision-making in the behavior of individuals. The U.S. Energy Information Administration (EIA) has an interest in behavioral economics as one influence on energy demand.

  6. Bridging Technometric Method and Innovation Process: An Initial Study

    NASA Astrophysics Data System (ADS)

    Rumanti, A. A.; Reynaldo, R.; Samadhi, T. M. A. A.; Wiratmadja, I. I.; Dwita, A. C.

    2018-03-01

    The process of innovation is one of the ways used to increase the capability of a technology component so that it reflects the needs of an SME. The technometric method can be used to identify the level of technological advancement in an SME, and also which technology component needs to be maximized in order to deliver a significant innovation. This paper serves as an early study, laying out a conceptual framework that connects the principles of the innovation process from a well-established innovation model by Martin with the technometric method, based on the initial background research conducted at SME Ira Silver in Jogjakarta, Indonesia.

  7. Noisy Preferences in Risky Choice: A Cautionary Note

    PubMed Central

    2017-01-01

    We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual’s preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. PMID:28569526
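
    The paper's cautionary point is easy to reproduce in simulation. In the sketch below (all distributions and parameters invented), an agent who is risk neutral on average (u(x) = x^alpha with alpha centered on 1) is given symmetric preference noise plus logistic response noise; because the utility differences are asymmetric in alpha, the observed proportion of gamble choices drifts away from 0.5 and the agent looks systematically risk seeking.

      import math, random

      random.seed(0)

      def eu(outcomes, alpha):
          # expected utility of a lottery given u(x) = x ** alpha
          return sum(p * (x ** alpha) for p, x in outcomes)

      sure = [(1.0, 50.0)]
      gamble = [(0.5, 80.0), (0.5, 20.0)]   # same expected value as the sure option

      def p_choose_gamble(trials=100000, pref_noise=0.3, response_noise=20.0):
          chosen = 0
          for _ in range(trials):
              alpha = max(0.05, random.gauss(1.0, pref_noise))   # noisy preference
              diff = eu(gamble, alpha) - eu(sure, alpha)
              # logistic response noise on the utility difference
              if random.random() < 1.0 / (1.0 + math.exp(-diff / response_noise)):
                  chosen += 1
          return chosen / trials

      print(p_choose_gamble())   # typically noticeably above 0.5 despite mean risk neutrality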

  8. Frequency domain laser velocimeter signal processor

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Murphy, R. Jay

    1991-01-01

    A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a signal processor capable of operating in the frequency domain, maximizing the information obtainable from each signal burst. This allows a sophisticated approach to signal detection and processing, with a more accurate measurement of the chirp frequency resulting in an eight-fold increase in measurable signals over the present high-speed burst counter technology. Further, the required signal-to-noise ratio is reduced by a factor of 32, allowing measurements within boundary layers of wind tunnel models. Measurement accuracy is also increased by up to a factor of five.

  9. Modeling the Structural Dynamic of Industrial Networks

    NASA Astrophysics Data System (ADS)

    Wilkinson, Ian F.; Wiley, James B.; Lin, Aizhong

    Market systems consist of locally interacting agents who continuously pursue advantageous opportunities. Since the time of Adam Smith, a fundamental task of economics has been to understand how market systems develop and to explain their operation. During the intervening years, theory largely has stressed comparative statics analysis. Based on the assumptions of rational, utility- or profit-maximizing agents and negative (diminishing returns) feedback processes, traditional economic analysis seeks to describe the (generally) unique state of an economy corresponding to an initial set of assumptions. The analysis is static in the sense that it does not describe the process by which an economy might get from one state to another.

  10. A general equilibrium model of guest-worker migration: the source-country perspective.

    PubMed

    Djajic, S; Milbourne, R

    1988-11-01

    "This paper examines the problem of guest-worker migration from an economy populated by identical, utility-maximizing individuals with finite working lives. The decision to migrate, the rate of saving while abroad, as well as the length of a migrant's stay in the foreign country, are all viewed as part of a solution to an intertemporal optimization problem. In addition to studying the microeconomic aspects of temporary migration, the paper analyses the determinants of the equilibrium flow of migrants, the corresponding domestic wage, and the level of welfare enjoyed by a typical worker. Effects of an emigration tax are also investigated." excerpt

  11. The Influence of Consumer Goals and Marketing Activities on Product Bundling

    NASA Astrophysics Data System (ADS)

    Haijun, Wang

    Upon entering a store, consumers are faced with the questions of whether to buy, what to buy, and how much to buy. Consumers include products from different categories in their decision process, and product categories can be related in different ways. Product bundling is a process that involves the choice of at least two non-substitutable items. This research focuses on consumers' explicit product bundling activity at the point of sale. It takes the retailer's perspective and therefore leaves out consumers' brand choice decisions, concentrating on purchase incidence and quantity. Building on existing models, we integrate behavioural choice analysis and predictive choice modelling through the underlying behavioural models, called random utility maximization (RUM) models. The methodological contribution of this research lies in combining a nested logit choice model with a latent variable factor model. We point out several limitations for both theory and practice at the end.
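
    For concreteness, a minimal random-utility sketch: deterministic utility plus i.i.d. Gumbel noise yields the familiar multinomial-logit choice probabilities. The nested-logit and latent-factor structure of the paper is omitted here, and the attribute weights are invented.

      import math

      def logit_probs(utilities):
          # multinomial-logit choice probabilities, P_j = exp(V_j) / sum_k exp(V_k)
          m = max(utilities)                       # subtract max to avoid overflow
          exps = [math.exp(u - m) for u in utilities]
          z = sum(exps)
          return [e / z for e in exps]

      # deterministic utility of bundle j: V_j = beta' x_j
      beta = {"price": -0.8, "fit_with_basket": 1.2}
      bundles = [{"price": 3.0, "fit_with_basket": 2.0},
                 {"price": 1.0, "fit_with_basket": 0.5}]
      v = [sum(beta[k] * b[k] for k in beta) for b in bundles]
      print(logit_probs(v))

    A nested logit replaces the flat denominator with per-nest inclusive values, so that bundles sharing a nest (e.g., the same category combination) have correlated unobserved utility.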

  12. 77 FR 25145 - Commerce Spectrum Management Advisory Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-27

    ... innovation as possible, and make wireless services available to all Americans. (See charter, at http://www... federal capabilities and maximizing commercial utilization. NTIA will post a detailed agenda on its Web...

  13. Optimal population size and endogenous growth.

    PubMed

    Palivos, T; Yip, C K

    1993-01-01

    "Many applications in economics require the selection of an objective function which enables the comparison of allocations involving different population sizes. The two most commonly used criteria are the Benthamite and the Millian welfare functions, also known as classical and average utilitarianism, respectively. The former maximizes total utility of the society and thus represents individuals, while the latter maximizes average utility and so represents generations. Edgeworth (1925) was the first to conjecture, that the Benthamite principle leads to a larger population size and a lower standard of living.... The purpose of this paper is to examine Edgeworth's conjecture in an endogenous growth framework in which there are interactions between output and population growth rates. It is shown that, under conditions that ensure an optimum, the Benthamite criterion leads to smaller population and higher output growth rates than the Millian." excerpt

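    In the notation usually used for this comparison (assumed here, not quoted from the excerpt), with population size $N$ and per-capita consumption $c$, the two planners maximize

      W_{\text{Benthamite}} = N\,u(c) \qquad\text{versus}\qquad W_{\text{Millian}} = u(c),

    so the Benthamite planner is willing to accept a lower per-capita standard of living whenever the gain in numbers compensates in total utility, which is the intuition behind Edgeworth's conjecture.
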
  14. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  15. Developing a framework for energy technology portfolio selection

    NASA Astrophysics Data System (ADS)

    Davoudpour, Hamid; Ashrafi, Maryam

    2012-11-01

    Today, the increased consumption of energy in world, in addition to the risk of quick exhaustion of fossil resources, has forced industrial firms and organizations to utilize energy technology portfolio management tools viewed both as a process of diversification of energy sources and optimal use of available energy sources. Furthermore, the rapid development of technologies, their increasing complexity and variety, and market dynamics have made the task of technology portfolio selection difficult. Considering high level of competitiveness, organizations need to strategically allocate their limited resources to the best subset of possible candidates. This paper presents the results of developing a mathematical model for energy technology portfolio selection at a R&D center maximizing support of the organization's strategy and values. The model balances the cost and benefit of the entire portfolio.

  16. Effect of constitutive inactivation of the myostatin gene on the gain in muscle strength during postnatal growth in two murine models.

    PubMed

    Stantzou, Amalia; Ueberschlag-Pitiot, Vanessa; Thomasson, Remi; Furling, Denis; Bonnieu, Anne; Amthor, Helge; Ferry, Arnaud

    2017-02-01

    The effect of constitutive inactivation of the gene encoding myostatin on the gain in muscle performance during postnatal growth has not been well characterized. We analyzed 2 murine myostatin knockout (KO) models, (i) the Lee model (KO-Lee) and (ii) the Grobet model (KO-Grobet), and measured the contraction of tibialis anterior muscle in situ. Absolute maximal isometric force was increased in 6-month-old KO-Lee and KO-Grobet mice, as compared to wild-type mice. Similarly, absolute maximal power was increased in 6-month-old KO-Lee mice. In contrast, specific maximal force (maximal force per unit of muscle mass) was decreased in all 6-month-old male and female KO mice, except in 6-month-old female KO-Grobet mice, whereas specific maximal power was reduced only in male KO-Lee mice. Genetic inactivation of myostatin increases maximal force and power, but in return it reduces muscle quality, particularly in male mice.

  17. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-11-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  18. Thermoelectric Properties of SnS with Na-Doping.

    PubMed

    Zhou, Binqiang; Li, Shuai; Li, Wen; Li, Juan; Zhang, Xinyue; Lin, Siqi; Chen, Zhiwei; Pei, Yanzhong

    2017-10-04

    Tin sulfide (SnS), a low-cost compound from the IV-VI semiconductors, has attracted particular attention due to its great potential for large-scale thermoelectric applications. However, pristine SnS shows a low carrier concentration, which leads to a low thermoelectric performance. In this work, sodium is utilized to substitute Sn to increase the hole concentration and consequently improve the thermoelectric power factor. The resultant Hall carrier concentration, up to ∼10¹⁹ cm⁻³, is the highest concentration reported so far for this compound. This further leads to the highest thermoelectric figure of merit, zT of 0.65, reported so far in polycrystalline SnS. The temperature-dependent Hall mobility shows a transition of the carrier-scattering source from a grain boundary potential below 400 K to acoustic phonons at higher temperatures. The electronic transport properties can be well understood by a single parabolic band (SPB) model, enabling a quantitative guidance for maximizing the thermoelectric power factor. Using the experimental lattice thermal conductivity, a maximal zT of 0.8 at 850 K is expected when the carrier concentration is further increased to ∼1 × 10²⁰ cm⁻³, according to the SPB model. This work not only demonstrates SnS as a promising low-cost thermoelectric material but also details the material parameters that fundamentally determine the thermoelectric properties.
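
    The SPB relations invoked above are standard textbook forms; the sketch below evaluates the acoustic-phonon-scattering expressions for the Seebeck coefficient and carrier concentration from Fermi-Dirac integrals (it assumes scipy is available, and the effective-mass value is an invented placeholder, not the paper's fitted parameter).

      import math
      from scipy.integrate import quad

      KB_OVER_E = 8.617e-5   # k_B / e in V/K

      def fermi_integral(j, eta):
          # F_j(eta) = integral of x^j / (1 + exp(x - eta)) from 0 to infinity
          return quad(lambda x: x ** j / (1.0 + math.exp(x - eta)), 0, 100)[0]

      def seebeck_uV_per_K(eta):
          # SPB with acoustic-phonon scattering: S = (k_B/e) * (2 F1/F0 - eta)
          return KB_OVER_E * (2 * fermi_integral(1, eta) / fermi_integral(0, eta) - eta) * 1e6

      def carrier_concentration_per_m3(eta, m_star_ratio, T):
          # n = 4*pi*(2*m*kB*T/h^2)^(3/2) * F_{1/2}(eta)
          m = m_star_ratio * 9.109e-31   # m_star_ratio is a placeholder assumption
          kT = 1.381e-23 * T
          h = 6.626e-34
          return 4 * math.pi * (2 * m * kT / h ** 2) ** 1.5 * fermi_integral(0.5, eta)

      eta = 0.1   # reduced Fermi level (placeholder)
      print(seebeck_uV_per_K(eta))
      print(carrier_concentration_per_m3(eta, m_star_ratio=1.0, T=300) * 1e-6, "cm^-3")

    Sweeping eta traces out the Pisarenko-type relation between Seebeck coefficient and carrier concentration, which is how an SPB fit yields the optimal doping level quoted in the abstract.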

  19. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero.

    PubMed

    Zhang, G; Stillinger, F H; Torquato, S

    2016-11-28

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a "perfect glass". A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  20. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    PubMed Central

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-01-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite. PMID:27892452

  1. Long-Term Counterinsurgency Strategy: Maximizing Special Operations and Airpower

    DTIC Science & Technology

    2010-02-01

    operations forces possess a repertoire of capabilities and attributes which impart them with unique strategic utility. “That utility reposes most...flashlight), LTMs are employed in a similar role to cue aircrews equipped with Night Vision Devices (NVDs). Concurrently, employment of small laptop...Special Operations Forces (PSS-SOF) and Precision Fires Image Generator (PFIG) have brought similar benefit to the employment of GPS/INS targeted weapons

  2. Bioengineering and Coordination of Regulatory Networks and Intracellular Complexes to Maximize Hydrogen Production by Phototrophic Microorganisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabita, F. Robert

    2013-07-30

    In this study, the Principal Investigator, F. R. Tabita, has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved, and this will allow for the potential utilization of waste materials as a feedstock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available "unrestrained" organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell, and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.

  3. On Social Optima of Non-Cooperative Mean Field Games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Zhao, Lin

    This paper studies the social optima in noncooperative mean-field games for a large population of agents with heterogeneous stochastic dynamic systems. Each agent seeks to maximize an individual utility functional, and utility functionals of different agents are coupled through a mean field term that depends on the mean of the population states/controls. The paper has the following contributions. First, we derive a set of control strategies for the agents that possess the *-Nash equilibrium property and converge to the mean-field Nash equilibrium as the population size goes to infinity. Second, we study the social optimum in the mean field game. We derive the conditions, termed the socially optimal conditions, under which the *-Nash equilibrium of the mean field game maximizes the social welfare. Third, a primal-dual algorithm is proposed to compute the *-Nash equilibrium of the mean field game. Since the *-Nash equilibrium of the mean field game is socially optimal, we can compute the equilibrium by solving the social welfare maximization problem, which can be addressed by a decentralized primal-dual algorithm. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.

  4. Research priorities and plans for the International Space Station-results of the 'REMAP' Task Force

    NASA Technical Reports Server (NTRS)

    Kicza, M.; Erickson, K.; Trinh, E.

    2003-01-01

    Recent events in the International Space Station (ISS) Program have resulted in the necessity to re-examine the research priorities and research plans for future years. Due to both technical and fiscal resource constraints expected on the International Space Station, it is imperative that research priorities be carefully reviewed and clearly articulated. In consultation with OSTP and the Office of Management and Budget (OMB), NASA's Office of Biological and Physical Research (OBPR) assembled an ad-hoc external advisory committee, the Biological and Physical Research Maximization and Prioritization (REMAP) Task Force. This paper describes the outcome of the Task Force and how it is being used to define a roadmap for near- and long-term Biological and Physical Research objectives that supports NASA's Vision and Mission. Additionally, the paper discusses further prioritizations that were necessitated by budget and ISS resource constraints in order to maximize utilization of the International Space Station. Finally, a process has been developed to integrate the requirements for this prioritized research with other agency requirements to develop an integrated ISS assembly and utilization plan that maximizes scientific output.

  5. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    PubMed

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R² = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R². Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for replication. This approach provides utility for the prediction of aggression behavior, particularly in the context of large multivariate datasets.
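
    Component-wise gradient boosting, as used above, performs variable selection implicitly: each iteration fits every predictor separately to the current residuals and updates only the single best one, with shrinkage. A compact L2-boosting sketch on placeholder data (not the study's dataset):

      import numpy as np

      def componentwise_boost(X, y, n_iter=500, nu=0.1):
          n, p = X.shape
          coef, intercept = np.zeros(p), y.mean()
          resid = y - intercept
          for _ in range(n_iter):
              best_j, best_b, best_sse = None, 0.0, np.inf
              for j in range(p):
                  xj = X[:, j]
                  b = xj @ resid / (xj @ xj)          # least-squares fit of one column
                  sse = np.sum((resid - b * xj) ** 2)
                  if sse < best_sse:
                      best_j, best_b, best_sse = j, b, sse
              coef[best_j] += nu * best_b             # shrunken update of one coefficient
              resid -= nu * best_b * X[:, best_j]
          return intercept, coef                      # never-selected predictors keep coef 0

      rng = np.random.default_rng(0)
      X = rng.normal(size=(47, 20))                   # 47 participants, 20 candidate predictors
      y = 2 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.5, size=47)
      b0, b = componentwise_boost(X, y)
      print(np.nonzero(np.abs(b) > 1e-8)[0])          # indices of the selected predictors

    Backward elimination, the study's second stage, would then refit this selected set repeatedly, dropping the least useful predictor until parsimony and fit are balanced.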

  6. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a large body of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the properties of users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and are generally unknown before submission, which previous work unfortunately ignores. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling the workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on the multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerate multiple sequence alignment. Besides, adopting the co-run computation model can maximize the entire system utilization significantly. The source code is available at https://github.com/wangvsa/CMSA.
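
    A naive sketch of the center star strategy's first stage, scoring every sequence by its total similarity to the rest and picking the maximizer as the center; simple k-mer overlap stands in for an alignment score here, and CMSA's bitmap-based algorithm is not reproduced.

      def kmer_set(seq, k=4):
          # all length-k substrings of the sequence
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def pick_center(seqs, k=4):
          sets = [kmer_set(s, k) for s in seqs]
          best, best_score = 0, -1
          for i in range(len(seqs)):
              # total similarity of sequence i to all others
              score = sum(len(sets[i] & sets[j]) for j in range(len(seqs)) if j != i)
              if score > best_score:
                  best, best_score = i, score
          return best

      seqs = ["ACGTACGTGG", "ACGTACGAGG", "TTGTACGTGG", "ACGAACGTGC"]
      print(pick_center(seqs))

    Once a center is chosen, the remaining sequences are pairwise-aligned to it and the pairwise alignments are merged, which is what makes the strategy effective for highly similar sequences.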

  7. Individual versus Household Migration Decision Rules: Gender and Marital Status Differences in Intentions to Migrate in South Africa.

    PubMed

    Gubhaju, Bina; De Jong, Gordon F

    2009-03-01

    This research tests the thesis that the neoclassical micro-economic and the new household economic theoretical assumptions on migration decision-making rules are segmented by gender, marital status, and time frame of intention to migrate. Comparative tests of both theories within the same study design are relatively rare. Utilizing data from the Causes of Migration in South Africa national migration survey, we analyze how individually held "own-future" versus alternative "household well-being" migration decision rules affect the intentions to migrate of male and female adults in South Africa. Results from the gender- and marital-status-specific logistic regression models show consistent support for the differentiated gender-marital status decision rule thesis. Specifically, the "maximizing one's own future" neoclassical microeconomic theory proposition is more applicable for never-married men and women, the "maximizing household income" proposition for married men with short-term migration intentions, and the "reduce household risk" proposition for longer time horizon migration intentions of married men and women. Results provide new evidence on the way household strategies and individual goals jointly affect intentions to move or stay.

  8. DOT report for implementing OMB's information dissemination quality guidelines

    DOT National Transportation Integrated Search

    2002-08-01

    Consistent with the Office of Management and Budget's (OMB) Guidelines (for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies) implementing Section 515 of the Treasury and...

  9. Consumer Behavior in the Choice of Mode of Transport: A Case Study in the Toledo-Madrid Corridor

    PubMed Central

    Muro-Rodríguez, Ana I.; Perez-Jiménez, Israel R.; Gutiérrez-Broncano, Santiago

    2017-01-01

    Within the context of the consumption of goods or services, the decisions made by individuals involve a choice among a set of discrete alternatives, such as the choice of mode of transport. The standard methodology for analyzing this consumer behavior is the discrete choice model, based on random utility theory. These models define preferences through a utility function that each individual maximizes. Also termed disaggregated demand models, they derive demand from the decisions of a set of individuals and are formalized through probabilistic models. The objective of this study is to determine consumer behavior in the choice of a service, namely transport services in a short-distance corridor such as Toledo-Madrid. The Toledo-Madrid corridor is characterized by its short distance, with the high-speed train (HST) available among the options for reaching the airport, along with the bus and the car, and where combined HST and aircraft services can be offered as complementary modes. By applying disaggregated transport models to revealed-preference and stated-preference survey data, one can determine the most important variables involved in the choice and estimate individuals' willingness to pay. This willingness to pay may condition the use of certain transport policies to promote efficient transportation. PMID:28676776
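
    To make the random-utility mechanics concrete, here is a minimal multinomial logit sketch in Python, not the model estimated in the study: deterministic utilities are linear in mode attributes, and utility maximization with Gumbel errors yields the familiar logit choice probabilities. The coefficients and attribute values for the three corridor modes are invented for illustration.

      import math

      # Hypothetical utility coefficients: cost (euros) and travel time (min).
      BETA_COST, BETA_TIME = -0.08, -0.03

      # Invented attributes for the Toledo-Madrid corridor modes.
      modes = {
          "HST": {"cost": 25.0, "time": 33.0},
          "bus": {"cost": 10.0, "time": 80.0},
          "car": {"cost": 18.0, "time": 60.0},
      }

      def choice_probabilities(modes):
          """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
          v = {m: BETA_COST * a["cost"] + BETA_TIME * a["time"]
               for m, a in modes.items()}
          denom = sum(math.exp(x) for x in v.values())
          return {m: math.exp(x) / denom for m, x in v.items()}

      print(choice_probabilities(modes))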

  10. Consumer Behavior in the Choice of Mode of Transport: A Case Study in the Toledo-Madrid Corridor.

    PubMed

    Muro-Rodríguez, Ana I; Perez-Jiménez, Israel R; Gutiérrez-Broncano, Santiago

    2017-01-01

    Within the context of the consumption of goods or services, the decisions made by individuals involve a choice among a set of discrete alternatives, such as the choice of mode of transport. The standard methodology for analyzing this consumer behavior is the discrete choice model, based on random utility theory. These models define preferences through a utility function that each individual maximizes. Also termed disaggregated demand models, they derive demand from the decisions of a set of individuals and are formalized through probabilistic models. The objective of this study is to determine consumer behavior in the choice of a service, namely transport services in a short-distance corridor such as Toledo-Madrid. The Toledo-Madrid corridor is characterized by its short distance, with the high-speed train (HST) available among the options for reaching the airport, along with the bus and the car, and where combined HST and aircraft services can be offered as complementary modes. By applying disaggregated transport models to revealed-preference and stated-preference survey data, one can determine the most important variables involved in the choice and estimate individuals' willingness to pay. This willingness to pay may condition the use of certain transport policies to promote efficient transportation.

  11. Cardio-Respiratory Responses to Maximal Work During Arm and Bicycle Ergometry.

    ERIC Educational Resources Information Center

    Israel, Richard G.; Hardison, George T.

    This study compared cardio-respiratory responses during maximal arm work using a Monarch Model 880 Rehab Trainer to cardio-respiratory responses during maximal leg work on a Monarch Model 850 Bicycle Ergometer. Subjects for the investigation were 17 male university students ranging from 18 to 28 years of age. The specific variables compared…

  12. Do People Use the Shortest Path? An Empirical Test of Wardrop’s First Principle

    PubMed Central

    Zhu, Shanjiang; Levinson, David

    2015-01-01

    Most recent route choice models, following either the random utility maximization or rule-based paradigm, require explicit enumeration of feasible routes. The quality of model estimation and prediction is sensitive to the appropriateness of the consideration set. However, few empirical studies of revealed route characteristics have been reported in the literature. This study evaluates the widely applied shortest path assumption by evaluating routes followed by residents of the Minneapolis—St. Paul metropolitan area. Accurate Global Positioning System (GPS) and Geographic Information System (GIS) data were employed to reveal routes people used over an eight to thirteen week period. Most people did not choose the shortest path. Using three weeks of that data, we find that current route choice set generation algorithms do not reveal the majority of paths that individuals took. Findings from this study may guide future efforts in building better route choice models. PMID:26267756
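
    The shortest-path benchmark that the revealed GPS routes are tested against is straightforward to reproduce on a toy network. The sketch below is a generic Dijkstra implementation, not code from the study; the network and travel times are invented.

      import heapq

      def dijkstra(graph, origin, dest):
          """Shortest path by travel time on a directed weighted graph.

          graph: dict mapping node -> list of (neighbor, travel_time).
          Returns (total_time, path) or (inf, []) if unreachable.
          """
          pq = [(0.0, origin, [origin])]
          seen = set()
          while pq:
              t, node, path = heapq.heappop(pq)
              if node == dest:
                  return t, path
              if node in seen:
                  continue
              seen.add(node)
              for nbr, w in graph.get(node, []):
                  if nbr not in seen:
                      heapq.heappush(pq, (t + w, nbr, path + [nbr]))
          return float("inf"), []

      # Toy network with invented travel times (minutes).
      net = {"A": [("B", 5), ("C", 9)], "B": [("C", 2), ("D", 7)], "C": [("D", 3)]}
      print(dijkstra(net, "A", "D"))  # -> (10.0, ['A', 'B', 'C', 'D'])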

  13. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    NASA Astrophysics Data System (ADS)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM) and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
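
    To make the optimization step concrete, here is a deliberately simplified stand-in for the paper's SMP-plus-genetic-algorithm machinery: steady-state availability is written as uptime over total time for a hypothetical downtime structure, and the condition monitoring interval is chosen by a plain grid search rather than a genetic algorithm. All numbers and the availability formula are illustrative assumptions, not the authors' model.

      # Hypothetical renewal-style availability model: more frequent condition
      # monitoring catches more degradation (fewer failures) but adds downtime.
      FAILURE_RATE_0 = 0.002   # failures per hour with continuous monitoring
      INSPECT_HOURS = 4.0      # downtime per condition-monitoring inspection
      REPAIR_HOURS = 48.0      # downtime per failure

      def availability(interval_hours):
          """Steady-state availability = uptime / (uptime + downtime)."""
          # Assume the effective failure rate grows with the monitoring interval.
          failure_rate = FAILURE_RATE_0 * (1.0 + interval_hours / 500.0)
          horizon = 10_000.0  # operating hours considered
          n_inspections = horizon / interval_hours
          n_failures = failure_rate * horizon
          downtime = n_inspections * INSPECT_HOURS + n_failures * REPAIR_HOURS
          return horizon / (horizon + downtime)

      best = max(range(50, 2001, 10), key=availability)
      print(best, round(availability(best), 4))  # interval (h) maximizing availability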

  14. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
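
    The EM alternation at the heart of the method can be shown in miniature. The following sketch is a generic two-component one-dimensional Gaussian mixture EM in Python; it stands in for the far richer model with tissue labels, registration and tumor-growth parameters, and only illustrates the E-step/M-step loop the abstract describes.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic 1-D "intensities" drawn from two tissue-like classes.
      x = np.concatenate([rng.normal(2.0, 0.5, 300), rng.normal(5.0, 0.8, 200)])

      # Initial guesses for means, standard deviations, and mixing weights.
      mu, sd, w = np.array([1.0, 6.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

      for _ in range(50):
          # E-step: posterior probability of each class for every observation.
          lik = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
          post = lik / lik.sum(axis=1, keepdims=True)
          # M-step: re-estimate parameters from the soft assignments.
          n_k = post.sum(axis=0)
          mu = (post * x[:, None]).sum(axis=0) / n_k
          sd = np.sqrt((post * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
          w = n_k / len(x)

      print(np.round(mu, 2), np.round(sd, 2), np.round(w, 2))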

  15. Joint segmentation and deformable registration of brain scans guided by a tumor growth model.

    PubMed

    Gooya, Ali; Pohl, Kilian M; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth.

  16. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
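
    A toy version of the two transformation methods helps fix ideas. The sketch below, assuming a small pandas table with invented quasi-identifiers, first generalizes attributes (age to decade bands, ZIP codes truncated) and then suppresses any record whose equivalence class is still smaller than k. It illustrates the coding model the abstract refers to, not ARX's optimal search algorithm.

      import pandas as pd

      data = pd.DataFrame({
          "age":  [34, 36, 35, 52, 53, 29],
          "zip":  ["80331", "80333", "80331", "80469", "80469", "80331"],
          "diag": ["A", "B", "A", "C", "A", "B"],
      })
      K = 2

      # (a) Generalization: coarsen the quasi-identifiers.
      data["age"] = (data["age"] // 10) * 10        # decade bands
      data["zip"] = data["zip"].str[:3] + "**"      # truncate ZIP codes

      # (b) Suppression: drop records whose equivalence class is still < k.
      sizes = data.groupby(["age", "zip"])["diag"].transform("size")
      anonymized = data[sizes >= K]
      print(anonymized)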

  17. Surface scanning inspection system particle detection dependence on aluminum film morphology

    NASA Astrophysics Data System (ADS)

    Prater, Walter; Tran, Natalie; McGarvey, Steve

    2012-03-01

    Physical vapor deposition (PVD) aluminum films present unique challenges when detecting particulate defects with a Surface Scanning Inspection System (SSIS). Aluminum (Al) films 4500Å thick were deposited on 300mm particle-grade bare Si wafers at two temperatures using a Novellus Systems INOVA® NExT system. Film surface roughness and morphology measurements were performed using a Veeco Vx310® atomic force microscope (AFM). AFM characterization found the high deposition temperature (TD) Al roughness (root mean square 16.5 nm) to be five times rougher than the low-TD Al roughness (RMS 3.7 nm). High-TD Al had grooves at the grain boundaries that were measured to be 20 to 80 nm deep. Scanning electron microscopy (SEM) examination, with a Hitachi RS6000 defect review SEM, confirmed the presence of pronounced grain grooves. SEM images established that the low-TD filmed wafers have fine grains (0.1 to 0.3 μm diameter) and the high-TD film wafers have fifty-times-larger equiaxed platelet-shaped grains (5 to 15 μm diameter). Calibrated polystyrene latex (PSL) spheres ranging in size from 90 nm to 1 μm were deposited in circular patterns on the wafers using an aerosol deposition chamber. PSL sphere depositions at each spot were controlled to yield 2000 to 5000 counts. A Hitachi LS9100® dark field full wafer SSIS was used to experimentally determine the relationship of the PSL sphere scattered light intensity with S-polarized light, a measure of scattering cross-section, with respect to the calibrated PSL sphere diameter. Comparison of the SSIS scattered light versus PSL spot size calibration curves shows two distinct differences. The scattering cross-section (intensity) of the PSL spheres increased on the low-TD Al film with its smooth surface, and the low-TD Al film defect detection sensitivity was 126 nm, compared with 200 nm for the rougher high-TD Al film. This can be explained by the higher signal-to-noise ratio attributed to the smooth low-TD Al. Dark field defect detection on surface scanning inspection systems is used to rapidly measure defectivity data. The user generates a calibration curve on the SSIS to plot the intensity of the light scattering derived at each National Institute of Standards and Technology (NIST) certified PSL deposition spot. It is not uncommon for the end user to embark upon the time-consuming process of attempting to "push" the maximal SSIS film-specific sensitivity curve beyond the optical performance capability of the SSIS. Bidirectional reflectance distribution function (BRDF) light scattering modeling was utilized as a means of determining the most appropriate polarity prior to the SSIS recipe creation process. The modeling utilized the Al refractive index (n) and extinction coefficient (k) and the SSIS detector angles and laser wavelength. The modeling results allowed predetermination of the maximal sensitivity for each Al thickness and eliminated unnecessary recipe modification trial-and-error in search of the SSIS maximal sensitivity. The modeling accurately forecasted the optimal polarization and maximal sensitivity of the SSIS recipe, which, by avoiding a trial-and-error approach, can result in a substantial savings in time and resources.

  18. Mass and Volume Optimization of Space Flight Medical Kits

    NASA Technical Reports Server (NTRS)

    Keenan, A. B.; Foy, Millennia Hope; Myers, Jerry

    2014-01-01

    Resource allocation is a critical aspect of space mission planning. All resources, including medical resources, are subject to a number of mission constraints such as maximum mass and volume. However, unlike many resources, there is often limited understanding of how to optimize medical resources for a mission. The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulates outcomes and describes the impact of medical events in terms of lost crew time, medical resource usage, and the potential for medically required evacuation. Previously published work describes an approach that uses the IMM to generate optimized medical kits that maximize benefit to the crew subject to mass and volume constraints. We improve upon the results obtained previously and extend our approach to minimize mass and volume while meeting a benefit threshold. METHODS: We frame the medical kit optimization problem as a modified knapsack problem and implement an algorithm utilizing dynamic programming. Using this algorithm, optimized medical kits were generated for 3 mission scenarios with the goal of minimizing the medical kit mass and volume for a specified likelihood of evacuation or Crew Health Index (CHI) threshold. The algorithm was expanded to generate medical kits that maximize likelihood of evacuation or CHI subject to mass and volume constraints. RESULTS AND CONCLUSIONS: In maximizing benefit to crew health subject to certain constraints, our algorithm generates medical kits that more closely resemble the unlimited-resource scenario than previous approaches which leverage medical risk information generated by the IMM. Our work here demonstrates that this algorithm provides an efficient and effective means to objectively allocate medical resources for spaceflight missions and provides an effective means of addressing tradeoffs in medical resource allocations and crew mission success parameters.
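
    The modified-knapsack framing can be illustrated with a standard 0/1 knapsack dynamic program. The sketch below handles a single capacity constraint with invented item data; the actual IMM optimization uses both mass and volume constraints and IMM-derived benefit scores, so this is a schematic of the approach only.

      def knapsack(items, capacity):
          """0/1 knapsack: maximize total benefit under one capacity limit.

          items: list of (name, weight, benefit); capacity in weight units.
          Returns (best_benefit, chosen_names).
          """
          n = len(items)
          # dp[i][c] = best benefit using the first i items with capacity c.
          dp = [[0] * (capacity + 1) for _ in range(n + 1)]
          for i, (_, w, b) in enumerate(items, start=1):
              for c in range(capacity + 1):
                  dp[i][c] = dp[i - 1][c]
                  if w <= c:
                      dp[i][c] = max(dp[i][c], dp[i - 1][c - w] + b)
          # Trace back which items were chosen.
          chosen, c = [], capacity
          for i in range(n, 0, -1):
              if dp[i][c] != dp[i - 1][c]:
                  name, w, _ = items[i - 1]
                  chosen.append(name)
                  c -= w
          return dp[n][capacity], chosen

      # Invented medical-kit items: (name, mass in 100 g units, benefit score).
      kit = [("suture", 3, 10), ("analgesic", 2, 7), ("splint", 6, 12), ("IV", 5, 9)]
      print(knapsack(kit, 10))  # -> (26, ['IV', 'analgesic', 'suture'])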

  19. Garcina cambogia leaf and seawater for tannase production by marine Aspergillus awamori BTMFW032 under slurry state fermentation.

    PubMed

    Beena, S P; Basheer, Soorej M; Bhat, Sarita G; Chandrasekaran, M

    2011-12-01

    Garcinia gummi-gutta (syn. G. cambogia, G. quaesita), known to have medicinal properties, was evaluated as a substrate and inducer for tannase production by a marine Aspergillus awamori BTMFW032 under slurry state fermentation, using Czapek-Dox minimal medium and sea water as the cultivation medium. Among the various natural tannin substrates evaluated, Garcinia leaf supported maximal tannase production. The cultivation conditions and components of the cultivation medium were optimized employing response surface methodology. The experimental results were fitted to a second-order polynomial model at a 92.2% level of significance (p < 0.0001). The maximal tannase activity was obtained in a slurry state medium containing 26.6% (w/v) Garcinia leaf, supplemented with 0.1% tannic acid as inducer. The optimum values of pH, temperature and inoculum concentration obtained were 5.0, 40 degrees C and 3%, respectively. A Box-Behnken model study of the fermentation conditions was carried out, and the best production of tannase was registered at 40 degrees C without agitation. The optimization strategy employing response surface methodology led to a nearly 3-fold increase in enzyme production, from 26.2 U/mL in the unoptimized medium to 75.2 U/mL in the Box-Behnken design, within 18 h of fermentation. It was observed that sea water supported maximal tannase production by A. awamori compared with other media, suggesting that the sea water salts could have played an inducer role in the expression of tannase-encoding genes. To the best of our knowledge, this is the first report on the production of tannase, an industrially important enzyme, utilizing Garcinia leaf as substrate under slurry state fermentation by marine A. awamori with sea water as the cultivation medium.

  20. An Integrated Framework for Analysis of Water Supply Strategies in a Developing City: Chennai, India

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Gorelick, S.; Goulder, L.

    2009-12-01

    Indian cities are facing a severe water crisis: rapidly growing populations, low tariffs, high leakage rates, and inadequate reservoir storage are straining water supply systems, resulting in unreliable, intermittent piped supply. Conventional approaches to studying the problem of urban water supply have typically considered only centralized piped supply by the water utility. Specifically, they have tended to overlook decentralized actions by consumers such as groundwater extraction via private wells and aquifer recharge by rainwater harvesting. We present an innovative integrative framework for analyzing urban water supply in Indian cities. The framework is used in a systems model of water supply in the city of Chennai, India that integrates different components of the urban water system: water flows into the reservoir system, diversion and distribution by the public water utility, groundwater flow in the urban aquifer, informal water markets and consumer behavior. Historical system behavior from 2002-2006 is used to calibrate the model. The historical system behavior highlights the buffering role of the urban aquifer, which stores water in periods of surplus for extraction by consumers via private wells. The model results show that in Chennai, distribution pipeline leaks result in the transfer of water from the inadequate reservoir system to the urban aquifer. The systems approach also makes it possible to evaluate and compare a wide range of centralized and decentralized policies. Three very different policies: Supply Augmentation (desalination), Efficiency Improvement (raising tariffs and fixing pipe leaks), and Rainwater Harvesting (recharging the urban aquifer by capturing rooftop and yard runoff) were evaluated using the model. The model results suggest that a combination of Rainwater Harvesting and Efficiency Improvement best meets our criteria of welfare maximization, equity, system reliability, and utility profitability. Importantly, the study shows that the combination policy emerges as optimal because of three conditions that are prevalent in Chennai: 1) widespread presence of private wells, 2) inadequate availability of reservoir storage to the utility, and 3) high cost of new supply sources.

  1. Evolutionary prisoner's dilemma games coevolving on adaptive networks.

    PubMed

    Lee, Hsuan-Wei; Malik, Nishant; Mucha, Peter J

    2018-02-01

    We study a model for switching strategies in the Prisoner's Dilemma game on adaptive networks of player pairings that coevolve as players attempt to maximize their return. We use a node-based strategy model wherein each player follows one strategy at a time (cooperate or defect) across all of its neighbors, changing that strategy and possibly changing partners in response to local changes in the network of player pairings and in the strategies used by connected partners. We compare and contrast numerical simulations with existing pair approximation differential equations for describing this system, as well as more accurate equations developed here using the framework of approximate master equations. We explore the parameter space of the model, demonstrating the relatively high accuracy of the approximate master equations in describing the observations made from simulations. We study two variations of this partner-switching model to investigate the system evolution, predict stationary states, and compare the total utilities and other qualitative differences between the two model variants.

  2. Matching mice to malignancy: molecular subgroups and models of medulloblastoma

    PubMed Central

    Lau, Jasmine; Schmidt, Christin; Markant, Shirley L.; Taylor, Michael D.; Wechsler-Reya, Robert J.

    2012-01-01

    Introduction: Medulloblastoma, the largest group of embryonal brain tumors, has historically been classified into five variants based on histopathology. More recently, epigenetic and transcriptional analyses of primary tumors have sub-classified medulloblastoma into four to six subgroups, most of which are incongruous with histopathological classification. Discussion: Improved stratification is required for prognosis and development of targeted treatment strategies, to maximize cure and minimize adverse effects. Several mouse models of medulloblastoma have contributed both to an improved understanding of progression and to developmental therapeutics. In this review, we summarize the classification of human medulloblastoma subtypes based on histopathology and molecular features. We describe existing genetically engineered mouse models, compare these to human disease, and discuss the utility of mouse models for developmental therapeutics. Just as accurate knowledge of the correct molecular subtype of medulloblastoma is critical to the development of targeted therapy in patients, we propose that accurate modeling of each subtype of medulloblastoma in mice will be necessary for preclinical evaluation and optimization of those targeted therapies. PMID:22315164

  3. A Systematic Process for Developing High Quality SaaS Cloud Services

    NASA Astrophysics Data System (ADS)

    La, Hyun Jung; Kim, Soo Dong

    Software-as-a-Service (SaaS) is a type of cloud service which provides software functionality through the Internet. Its benefits are well received in academia and industry. To fully realize these benefits, there should be effective methodologies to support the development of SaaS services that provide high reusability and applicability. Conventional approaches such as object-oriented methods do not effectively support SaaS-specific engineering activities such as modeling common features, variability, and designing quality services. In this paper, we present a systematic process for developing high quality SaaS and highlight the essentiality of commonality and variability (C&V) modeling to maximize reusability. We first define criteria for designing the process model and provide a theoretical foundation for SaaS: its meta-model and C&V model. We clarify the notion of commonality and variability in SaaS, and propose a SaaS development process accompanied by engineering instructions. Using the proposed process, SaaS services with high quality can be developed effectively.

  4. Strategic Style in Pared-Down Poker

    NASA Astrophysics Data System (ADS)

    Burns, Kevin

    This chapter deals with the manner of making diagnoses and decisions, called strategic style, in a gambling game called Pared-down Poker. The approach treats style as a mental mode in which choices are constrained by expected utilities. The focus is on two classes of utility, i.e., money and effort, and how cognitive styles compare to normative strategies in optimizing these utilities. The insights are applied to real-world concerns like managing the war against terror networks and assessing the risks of system failures. After "Introducing the Interactions" involved in playing poker, the contents are arranged in four sections, as follows. "Underpinnings of Utility" outlines four classes of utility and highlights the differences between them: economic utility (money), ergonomic utility (effort), informatic utility (knowledge), and aesthetic utility (pleasure). "Inference and Investment" dissects the cognitive challenges of playing poker and relates them to real-world situations of business and war, where the key tasks are inference (of cards in poker, or strength in war) and investment (of chips in poker, or force in war) to maximize expected utility. "Strategies and Styles" presents normative (optimal) approaches to inference and investment, and compares them to the cognitive heuristics by which people play poker, focusing on Bayesian methods and how they differ from human styles. The normative strategy is then pitted against cognitive styles in head-to-head tournaments, and tournaments are also held between different styles. The results show that style is ergonomically efficient and economically effective, i.e., style is smart. "Applying the Analysis" explores how style spaces, of the sort used to model individual behavior in Pared-down Poker, might also be applied to real-world problems where organizations evolve in terror networks and accidents arise from system failures.

  5. Trading Water Conservation Credits: A Coordinative Approach for Enhanced Urban Water Reliability

    NASA Astrophysics Data System (ADS)

    Gonzales, P.; Ajami, N. K.

    2016-12-01

    Water utilities in arid and semi-arid regions are increasingly relying on water use efficiency and conservation to extend the availability of supplies. Despite the spatial and institutional inter-dependency of many service providers, these demand-side management initiatives have traditionally been tackled by individual utilities operating in silos. In this study, we introduce a new approach to water conservation that addresses regional synergies—a novel system of tradable water conservation credits. Under the proposed approach, utilities have the flexibility to invest in water conservation measures that are appropriate for their specific service area. When utilities have insufficient capacity for local cost-effective measures, they may opt to purchase credits, contributing to fund subsidies for utilities that do have that capacity and can provide the credits, while the region as a whole benefits from more reliable water supplies. While similar programs have been used to address water quality concerns, to our knowledge this is one of the first studies proposing tradable credits for incentivizing water conservation. Through mathematical optimization, this study estimates the potential benefits of a trading program and demonstrates the institutional and economic characteristics needed for such a policy to be viable, including a proposed web platform to facilitate transparent regional planning, data-driven decision-making, and enhanced coordination of utilities. We explore the impacts of defining conservation targets tailored to the local realities of utilities, setting credit prices, and different policy configurations. We apply these models to the case study of the water utility members of the Bay Area Water Supply and Conservation Agency. Preliminary work shows that the diverse characteristics of these utilities present opportunities for the region to achieve conservation goals while maximizing the benefits to individual utilities through more flexible coordinative efforts.
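
    A stylized version of the trading optimization can be posed as a linear program: with tradable credits, each utility conserves up to its local capacity at its own unit cost, and only the aggregate regional target must be met, with credit sales and purchases making up individual shortfalls. The sketch below uses scipy.optimize.linprog with invented costs, capacities and targets; the study's actual formulation is richer.

      import numpy as np
      from scipy.optimize import linprog

      # Invented data for three utilities: unit cost ($/m3 saved) and the
      # maximum volume each can cost-effectively conserve locally (m3).
      cost = np.array([0.8, 1.5, 0.5])
      cap = np.array([400.0, 300.0, 250.0])
      target = np.array([200.0, 250.0, 150.0])   # per-utility obligations

      # With tradable credits, only the aggregate target must be met:
      # minimize total cost s.t. sum(x) >= sum(target), 0 <= x_i <= cap_i.
      res = linprog(c=cost,
                    A_ub=[-np.ones_like(cost)], b_ub=[-target.sum()],
                    bounds=list(zip(np.zeros_like(cap), cap)))
      x = res.x
      print("conservation per utility:", np.round(x, 1))
      print("credits sold (+) / bought (-):", np.round(x - target, 1))
      print("regional cost:", round(res.fun, 2))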

  6. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers deciding how to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In common practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to demonstrate the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, an original contribution of the authors, which includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral, which is calculated by simulation. The model is estimated by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
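
    The two ingredients named in the abstract, a Box-Cox transform of the explanatory variables and simulation over random coefficients, combine as in the sketch below. It computes a simulated choice probability for a binary choice with one Box-Cox-transformed attribute and a normally distributed cost coefficient; all parameter values are invented, and estimation would wrap this computation in a simulated log-likelihood maximization.

      import numpy as np

      rng = np.random.default_rng(1)

      def box_cox(x, lam):
          """Box-Cox transform; reduces to log(x) as lambda -> 0."""
          return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

      def simulated_prob(time, cost, lam, beta_time, beta_cost_mu,
                         beta_cost_sd, n_draws=5000):
          """Simulated P(alternative 0) for a binary logit.

          time, cost: attributes of the two alternatives (arrays of shape (2,)).
          The cost coefficient is random: Normal(beta_cost_mu, beta_cost_sd),
          so the choice probability is an integral approximated by averaging
          the logit probability over Monte Carlo draws of the coefficient.
          """
          beta_cost = rng.normal(beta_cost_mu, beta_cost_sd, n_draws)[:, None]
          v = beta_time * box_cox(np.asarray(time), lam) + beta_cost * np.asarray(cost)
          p = np.exp(v[:, 0]) / np.exp(v).sum(axis=1)   # P(alt 0) per draw
          return p.mean()                               # simulated integral

      # Invented example: alternative 0 is slower but cheaper.
      print(simulated_prob(time=[60.0, 35.0], cost=[8.0, 20.0],
                           lam=0.5, beta_time=-0.4, beta_cost_mu=-0.15,
                           beta_cost_sd=0.05))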

  7. The Integrated Medical Model: A Probabilistic Simulation Model for Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.

  8. The Integrated Medical Model: A Probabilistic Simulation Model Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G., Jr.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.

  9. Effects of pretension on work and power output of the muscle-tendon complex in dynamic elbow flexion.

    PubMed

    Wakayama, Akinobu; Nagano, Akinori; Hay, Dean; Fukashiro, Senshi

    2005-06-01

    The purpose of the present study was to investigate the effects of pretension on work and power output of the muscle-tendon complex during dynamic elbow flexion under several submaximal and maximal conditions. The subjects were 10 healthy female students. Randomized trials from 0% to 100% maximal voluntary contraction (MVC) pretension (PT) at 60 degrees elbow flexion were conducted. After about 3 s of static PT, subjects maximally flexed the elbow joint to 90 degrees using a quick release method. The weight was individually selected for each subject to provide an optimal load for the development of maximal power. A Hill-type model was utilized to analyze the performance of the elbow muscle-tendon complex (MTC). PT 0, 30, 60 and 90% MVC data were used for comparison, and all data were expressed as the mean and standard deviation. Multiple paired comparisons between the value of PT 0% MVC and that of the other PT levels were performed post-hoc using Dunnett's method. The work of the series elastic component (SEC) increased gradually with the PT level because elastic energy was stored in the PT phase. However, the work of the contractile component (CC) decreased gradually with an increase in PT level. Moreover, the work of the MTC also decreased, closely related to the CC work decrement. The phenomenon of CC work decrement was caused by force depression and was not related to either the force-length or force-velocity relationships of the CC. EMG activity (agonist and antagonist) showed no significant differences. Muscle geometry changes or intracellular chemical shifts may have occurred in the PT phase.

  10. CD134/CD137 Dual Costimulation-Elicited IFN-γ Maximizes Effector T Cell Function but Limits Treg Expansion

    PubMed Central

    Rose, Marie-Clare St.; Taylor, Roslyn A.; Bandyopadhyay, Suman; Qui, Harry Z.; Hagymasi, Adam T.; Vella, Anthony T.; Adler, Adam J.

    2012-01-01

    T cell tolerance to tumor antigens represents a major hurdle in generating tumor immunity. Combined administration of agonistic monoclonal antibodies to the costimulatory receptors CD134 plus CD137 can program T cells responding to tolerogenic antigen to undergo expansion and effector T cell differentiation, and also elicits tumor immunity. Nevertheless, CD134 and CD137 agonists can also engage inhibitory immune components. To understand how immune stimulatory versus inhibitory components are regulated during CD134 plus CD137 dual costimulation, the current study utilized a model where dual costimulation programs T cells encountering a highly tolerogenic self-antigen to undergo effector differentiation. IFN-γ was found to play a pivotal role in maximizing the function of effector T cells while simultaneously limiting the expansion of CD4+CD25+Foxp3+ Tregs. In antigen-responding effector T cells, IFN-γ operates via a direct cell-intrinsic mechanism to cooperate with IL-2 to program maximal expression of granzyme B. Simultaneously, IFN-γ limits expression of the IL-2 receptor alpha chain (CD25) and IL-2 signaling through a mechanism that does not involve T-bet-mediated repression of IL-2. IFN-γ also limited CD25 and Foxp3 expression on bystanding CD4+Foxp3+ Tregs, and limited the potential of these Tregs to expand. These effects could not be explained by the ability of IFN-γ to limit IL-2 availability. Taken together, during dual costimulation IFN-γ interacts with IL-2 through distinct mechanisms to program maximal expression of effector molecules in antigen-responding T cells while simultaneously limiting Treg expansion. PMID:23295363

  11. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted-Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
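
    The estimator's core idea, choosing the cutoff that maximizes expected utility rather than a purely statistical criterion, can be sketched without the time-dependent and censoring machinery of the actual ROCt implementation. The marker distributions, prevalence and the four outcome utilities below are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic marker values: diseased patients score higher on average.
      marker_healthy = rng.normal(50, 10, 1000)
      marker_diseased = rng.normal(70, 12, 400)
      prevalence = 0.1

      # Invented utilities (QALY-like) for the four decision outcomes.
      U_TP, U_FN, U_TN, U_FP = 0.80, 0.30, 1.00, 0.90

      def expected_utility(threshold):
          """EU of 'treat if marker > threshold' under the assumed utilities."""
          se = (marker_diseased > threshold).mean()   # sensitivity
          sp = (marker_healthy <= threshold).mean()   # specificity
          return (prevalence * (se * U_TP + (1 - se) * U_FN)
                  + (1 - prevalence) * (sp * U_TN + (1 - sp) * U_FP))

      grid = np.linspace(30, 100, 701)
      best = grid[np.argmax([expected_utility(t) for t in grid])]
      print(round(best, 1), round(expected_utility(best), 4))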

  12. Intelligent transportation systems : tools to maximize state transportation investments

    DOT National Transportation Integrated Search

    1997-07-28

    This Issue Brief summarizes national ITS goals and state transportation needs. It reviews states' experience with ITS to date and discusses the utility of ITS technologies to improve transportation infrastructure. The Issue Brief also provides cost...

  13. Optimization model for the allocation of water resources based on the maximization of employment in the agriculture and industry sectors

    NASA Astrophysics Data System (ADS)

    Habibi Davijani, M.; Banihabib, M. E.; Nadjafzadeh Anvar, A.; Hashemi, S. R.

    2016-02-01

    In many discussions, the work force is mentioned as the most important factor of production: it can, to a large extent, compensate for the physical and material limitations and shortcomings of other factors and thereby help increase the production level. Employment is also considered an effective factor in social issues. The goal of the present research is the allocation of water resources so as to maximize the number of jobs created in the industry and agriculture sectors. An objective that has attracted the attention of policy makers involved in water supply and distribution is the maximization of the interests of beneficiaries and consumers under the policies adopted. The present model applies the particle swarm optimization (PSO) algorithm to determine the optimum amount of water allocated to each water-demanding sector, the area under cultivation, agricultural production, employment in the agriculture sector, industrial production, and employment in the industry sector. Based on the results obtained from this research, by optimally allocating water resources in the central desert region of Iran, 1096 jobs can be created in the industry and agriculture sectors, an improvement of about 13% relative to the previous situation (non-optimal water utilization). It is also worth mentioning that optimizing the employment factor as a social parameter influences other areas, such as the economy, as well: in this investigation, the resulting economic benefits (incomes) improved from 73 billion Rials at baseline employment figures to 112 billion Rials under the optimized employment condition. It is therefore necessary to change the inter-sector and intra-sector water allocation models in this region, because this change not only leads to more jobs in the area, but also improves the region's economic conditions.
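
    A minimal particle swarm optimizer illustrates the algorithmic engine the study applies. The two-variable objective below, an invented diminishing-returns jobs function of the water allocated to agriculture and industry under a supply constraint, stands in for the paper's full allocation model.

      import numpy as np

      rng = np.random.default_rng(3)
      SUPPLY = 100.0  # total water available (million m3), invented

      def jobs(x):
          """Invented diminishing-returns employment from (agri, industry) water."""
          agri, ind = x
          if agri < 0 or ind < 0 or agri + ind > SUPPLY:
              return -1e9  # infeasible allocation
          return 40 * np.sqrt(agri) + 55 * np.sqrt(ind)

      # Basic PSO: particles track personal bests and follow the global best.
      n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      pos = rng.uniform(0, SUPPLY / 2, (n, 2))
      vel = np.zeros((n, 2))
      pbest, pbest_val = pos.copy(), np.array([jobs(p) for p in pos])
      gbest = pbest[pbest_val.argmax()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n, 2)), rng.random((n, 2))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([jobs(p) for p in pos])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()].copy()

      print(np.round(gbest, 1), round(jobs(gbest), 1))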

  14. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background: Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods: Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results: We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651

  15. Improving wind energy forecasts using an Ensemble Kalman Filter data assimilation technique in a fully coupled hydrologic and atmospheric model

    NASA Astrophysics Data System (ADS)

    Williams, J. L.; Maxwell, R. M.; Delle Monache, L.

    2012-12-01

    Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because of the chaotic nature of wind and its propensity to change speed and direction over short time scales. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. Using the PF.WRF model, a fully-coupled hydrologic and atmospheric model employing the ParFlow hydrologic model with the Weather Research and Forecasting model coupled via mass and energy fluxes across the land surface, we have explored the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture and wind speed, and demonstrated that reductions in uncertainty in these coupled fields propagate through the hydrologic and atmospheric system. We have adapted the Data Assimilation Research Testbed (DART), an implementation of the robust Ensemble Kalman Filter data assimilation algorithm, to expand our capability to nudge forecasts produced with the PF.WRF model using observational data. Using a semi-idealized simulation domain, we examine the effects of assimilating observations of variables such as wind speed and temperature collected in the atmosphere, and land surface and subsurface observations such as soil moisture on the quality of forecast outputs. The sensitivities we find in this study will enable further studies to optimize observation collection to maximize the utility of the PF.WRF-DART forecasting system.

  16. A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults

    ERIC Educational Resources Information Center

    George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.

    2007-01-01

    The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR sub…

  17. Network formation: neighborhood structures, establishment costs, and distributed learning.

    PubMed

    Chasparis, Georgios C; Shamma, Jeff S

    2013-12-01

    We consider the problem of network formation in a distributed fashion. Network formation is modeled as a strategic-form game, where agents represent nodes that form and sever unidirectional links with other nodes and derive utilities from these links. Furthermore, agents can form links only with a limited set of neighbors. Agents trade off the benefit from links, which is determined by a distance-dependent reward function, and the cost of maintaining links. When each agent acts independently, trying to maximize its own utility function, we can characterize “stable” networks through the notion of Nash equilibrium. In fact, the introduced reward and cost functions lead to Nash equilibria (networks), which exhibit several desirable properties such as connectivity, bounded-hop diameter, and efficiency (i.e., minimum number of links). Since Nash networks may not necessarily be efficient, we also explore the possibility of “shaping” the set of Nash networks through the introduction of state-based utility functions. Such utility functions may represent dynamic phenomena such as establishment costs (either positive or negative). Finally, we show how Nash networks can be the outcome of a distributed learning process. In particular, we extend previous learning processes to so-called “state-based” weakly acyclic games, and we show that the proposed network formation games belong to this class of games.

  18. Effective utilization of clinical laboratories.

    PubMed

    Murphy, J; Henry, J B

    1978-11-01

    Effective utilization of clinical laboratories requires that underutilization, overutilization, and malutilization be appreciated and eliminated or reduced. Optimal patient care service, although subjective to a major extent, is reflected in terms of outcome and cost. Increased per diem charges, reduced hospital stay, and increased laboratory workload over the past decade all require each laboratory to examine its internal operations to achieve economy and efficiency as well as maximal effectiveness. Increased research and development, an active managerial role on the part of pathologists, internal self-assessment, and an aggressive response to sophisticated scientific and clinical laboratory data base requirements are not only desirable but essential. The importance of undergraduate and graduate medical education in laboratory medicine to insure understanding as well as effective utilization is stressed. The costs and limitations as well as the accuracy, precision, sensitivity, specificity, and pitfalls of measurements and examinations must also be fully appreciated. Medical malpractice and defensive medicine and the use of critical values, emergency and routine services, and an active clinical role by the pathologist are of the utmost value in assuring effective utilization of the laboratory. A model for the optimal use of the laboratory including economy and efficiency has been achieved in the blood bank in regard to optimal hemotherapy for elective surgery, assuring superior patient care in a cost effective and safe manner.

  19. Kinetic models for goods exchange in a multi-agent market

    NASA Astrophysics Data System (ADS)

    Brugna, Carlo; Toscani, Giuseppe

    2018-06-01

    In this paper we introduce a system of kinetic equations describing an exchange market consisting of two populations of agents (dealers and speculators) expressing the same preferences for two goods, but applying different strategies in their exchanges. Similarly to the model proposed in Toscani et al. (2013), we describe the trading of the goods by means of some fundamental rules in price theory, in particular by using Cobb-Douglas utility functions for the exchange. The strategy of the speculators is to recover maximal utility from the trade by suitably acting on the percentage of goods which are exchanged. This microscopic description leads to a system of linear Boltzmann-type equations for the probability distributions of the goods on the two populations, in which the post-interaction variables depend on the pre-interaction ones through the mean quantities of the goods present in the market. In this case, it is shown analytically that the strategy of the speculators can drive the price of the two goods towards a zone in which there is a marked utility for their group. Also, following Toscani et al. (2013), the general system of nonlinear kinetic equations of Boltzmann type for the probability distributions of the goods on the two populations is described in detail. Numerical experiments then show how the policy of speculators can modify the final price of goods in this nonlinear setting.
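
    A microscopic simulation conveys the flavor of such binary exchange rules, though it is not the authors' Boltzmann-type kernel: in the sketch below, agents hold two goods and have Cobb-Douglas utility with exponent a; at each step a random pair trades good x for good y at a price set by the market means, each moving a fixed fraction of the way toward its optimal holdings. The price rule, trade fraction and endowments are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      N, A, GAMMA, STEPS = 500, 0.4, 0.5, 20000  # agents, CD exponent, trade fraction

      # Initial random endowments of the two goods.
      x = rng.uniform(1, 10, N)
      y = rng.uniform(1, 10, N)

      for _ in range(STEPS):
          # Price of good y in units of good x, set by the mean quantities
          # of the goods present in the market.
          p = x.mean() / y.mean()
          i, j = rng.choice(N, 2, replace=False)
          # Desired change in x to reach the Cobb-Douglas optimum x* = a * w,
          # where wealth w = x + p * y is measured in units of good x.
          want_i = A * (x[i] + p * y[i]) - x[i]
          want_j = A * (x[j] + p * y[j]) - x[j]
          if want_i * want_j < 0:  # one wants to buy x, the other to sell
              t = GAMMA * min(abs(want_i), abs(want_j)) * np.sign(want_i)
              x[i] += t                 # good x changes hands...
              x[j] -= t
              y[i] -= t / p             # ...paid for in good y at price p
              y[j] += t / p

      u = x**A * y**(1 - A)  # realized Cobb-Douglas utilities
      print(round(u.mean(), 3), round(u.std(), 3))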

  20. Virtual medicine: Utilization of the advanced cardiac imaging patient avatar for procedural planning and facilitation.

    PubMed

    Shinbane, Jerold S; Saxon, Leslie A

    Advances in imaging technology have led to a paradigm shift from planning cardiovascular procedures and surgeries with the actual patient in a "brick and mortar" hospital to utilizing the digitalized patient in the virtual hospital. Cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) provide a digitalized 3-D representation of individual patient anatomy and physiology that serves as an avatar, allowing virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology, previously accessible only during the actual procedure, could potentially limit the intrinsic risks related to time in the operating room, the cardiac procedural laboratory, and the overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of these technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of the predicted therapeutic result, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for the creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  1. Facebook's personal page modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    In this paper we attempt to define the utility of Facebook's Personal Page marketing method. This tool, which Facebook provides, is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them, and follows the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, is oriented to the Facebook platform, and has been tested in real circumstances. It was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making were confirmed by the management of the company. Facebook's Personal Page method can be adjusted, depending on the situation, to maximize the company's total profit, that is, to bring in new customers, keep the interest of existing customers and deliver traffic to its website.

  2. Nonadditive entropy maximization is inconsistent with Bayesian updating

    NASA Astrophysics Data System (ADS)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. In contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  3. Nonadditive entropy maximization is inconsistent with Bayesian updating.

    PubMed

    Pressé, Steve

    2014-11-01

    The maximum entropy method, used to infer probabilistic models from data, is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. In contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  4. Using a Pareto-optimal solution set to characterize trade-offs between a broad range of values and preferences in climate risk management

    NASA Astrophysics Data System (ADS)

    Garner, Gregory; Reed, Patrick; Keller, Klaus

    2015-04-01

    Integrated assessment models (IAMs) are often used to inform the design of climate risk management strategies. Previous IAM studies have broken important new ground on analyzing the effects of parametric uncertainties, but they are often silent on the implications of uncertainties regarding the problem formulation. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the definition of the objective(s). The standard DICE model adopts a single objective: to maximize a weighted sum of utilities of per-capita consumption. Decision makers, however, are often concerned with a broader range of values and preferences that may be poorly captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing (ii) the costs of abatement and (iii) the climate change damages. We use advanced multi-objective optimization methods to derive a set of Pareto-optimal solutions over which decision makers can trade off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multi-objective formulation to provide decision support.
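
    To make the a posteriori selection concrete, the sketch below filters candidate policies down to the non-dominated (Pareto-optimal) set; the four objective columns and all numerical values are illustrative stand-ins for the reformulated DICE objectives, not outputs of the study:

```python
# Minimal sketch of a posteriori Pareto filtering over candidate policies,
# assuming each candidate is scored on four objectives that are all to be
# minimized (utility is negated); the data are invented for illustration.
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Row j dominates row i if it is <= everywhere and < somewhere.
        dominated = (np.all(scores <= scores[i], axis=1)
                     & np.any(scores < scores[i], axis=1))
        if dominated.any():
            keep[i] = False
    return keep

# Columns: negated utility, warming-reliability gap, abatement cost, damages.
cands = np.array([[1.0, 0.2, 3.0, 2.0],
                  [0.9, 0.3, 3.5, 2.1],
                  [1.1, 0.2, 3.0, 2.0],   # dominated by the first row
                  [0.8, 0.5, 2.0, 2.5],
                  [1.0, 0.1, 4.0, 1.9]])
print(pareto_front(cands))   # -> [ True  True False  True  True]
```

    Decision makers can then inspect the surviving set and apply their own preferences, rather than committing to a single weighted objective in advance.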

  5. Sophisticated approval voting, ignorance priors, and plurality heuristics: a behavioral social choice analysis in a Thurstonian framework.

    PubMed

    Regenwetter, Michel; Ho, Moon-Ho R; Tsetlin, Ilia

    2007-10-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two types of plurality heuristics to model approval voting behavior. When using a sincere plurality heuristic, voters simplify their decision process by voting for their single favorite candidate. When using a strategic plurality heuristic, voters strategically focus their attention on the 2 front-runners and vote for their preferred candidate among these 2. Using a hierarchy of Thurstonian random utility models, the authors implemented these different decision rules and tested them statistically on 7 real-world approval voting elections. They cross-validated their key findings via a psychological Internet experiment. Although a substantial number of voters used the plurality heuristic in the real elections, they did so sincerely, not strategically. Moreover, even though Thurstonian models do not force such agreement, the results show, in contrast to common wisdom about social choice rules, that the sincere social orders by Condorcet, Borda, plurality, and approval voting are identical in all 7 elections and in the Internet experiment. PsycINFO Database Record (c) 2007 APA, all rights reserved.

  6. An application of prospect theory to a SHM-based decision problem

    NASA Astrophysics Data System (ADS)

    Bolognani, Denise; Verzobio, Andrea; Tonelli, Daniel; Cappello, Carlo; Glisic, Branko; Zonta, Daniele

    2017-04-01

    Decision making investigates choices whose consequences are uncertain and cannot be completely predicted. Rational behavior may be described by the so-called expected utility theory (EUT), whose aim is to help choose among several solutions so as to maximize the expectation of the consequences. However, Kahneman and Tversky developed an alternative model, called prospect theory (PT), showing that the basic axioms of EUT are violated in several instances. Compared with EUT, PT takes into account irrational behaviors and heuristic biases. It suggests an alternative approach in which probabilities are replaced by decision weights, which are strictly related to the decision maker's preferences and may differ between individuals. In particular, people underestimate the utility of uncertain scenarios compared to outcomes obtained with certainty, and show inconsistent preferences when the same choice is presented in different forms. The goal of this paper is to analyze a real case study involving a decision problem regarding the Streicker Bridge, a pedestrian bridge on the Princeton University campus. By modelling the manager of the bridge first with EUT and then with PT, we verify the differences between the two approaches and investigate how sensitive the two models are to the unpacking of probabilities, a common cognitive bias in irrational behavior.
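
    For readers who want the functional forms, the sketch below uses the Tversky-Kahneman (1992) parametric value and probability-weighting functions with their published median parameter estimates; this is a generic illustration of PT's machinery, not a reproduction of the paper's bridge-management model:

```python
# Prospect theory value and weighting functions in the Tversky-Kahneman
# (1992) parametric form; parameter defaults are their median estimates,
# used here only for illustration.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A prospect paying 100 with p = 0.05 (else 0): linear expected value is 5,
# while PT's decision weight (~0.13 here) inflates the rare outcome.
print(pt_weight(0.05) * pt_value(100))
```

    The inverse-S shape of the weighting function is what drives the overweighting of rare events and the certainty effect mentioned in the abstract.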

  7. Proteomic approaches to study the pig intestinal system.

    PubMed

    Soler, Laura; Niewold, Theo A; Moreno, Ángela; Garrido, Juan Jose

    2014-03-01

    One of the major challenges in pig production is managing digestive health to maximize feed conversion and growth rates, but also to minimize treatment costs and to safeguard public health. There is great interest in the development of useful tools for intestinal health monitoring and in the investigation of possible prophylactic/therapeutic intervention pathways. A great variety of in vivo and in vitro intestinal models of study have been developed in recent years. Understanding a system as complex as the intestinal system (IS), and studying its physiology and pathology, is not an easy task. Analysis of such a complex system requires systems biology techniques, like proteomics. However, for a correct interpretation of results and to maximize analysis performance, careful selection of the IS model of study and of the proteomic platform is required. The study of the IS is especially important in the pig, a species whose farming requires very careful management of husbandry procedures regarding feeding and nutrition. Incorrect management of the pig digestive system leads directly to economic losses related to suboptimal growth and feed utilization and/or to the appearance of intestinal infections, in particular diarrhea. Furthermore, this species is the most suitable experimental model for human IS studies. Proteomics has emerged as one of the most promising approaches to study the pig IS. In this review, we describe the most useful models of IS research in the pig and the different proteomic platforms available. An overview of recent findings in pig IS proteomics is also provided.

  8. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service to the user are the ultimate goals, which requires efficient scheduling. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms. PMID:26473166

  9. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service to the user are the ultimate goals, which requires efficient scheduling. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms.
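
    As an illustration of a scheduler that conditions on both job requirements and current resource availability, here is a minimal greedy best-fit sketch; the VM fields, job tuples and best-fit rule are hypothetical stand-ins, not the authors' algorithm or the V-MCT baseline:

```python
# Minimal sketch of availability-aware job placement: each job is placed on
# the VM that fits it most tightly; jobs that do not fit anywhere are queued.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    free_cpu: int
    free_mem: int

def schedule(jobs, vms):
    """Greedy best-fit: place each job on the VM with the least leftover CPU."""
    placement = {}
    for job_id, cpu, mem in jobs:             # assume jobs pre-sorted by priority
        fits = [vm for vm in vms if vm.free_cpu >= cpu and vm.free_mem >= mem]
        if not fits:
            placement[job_id] = None          # queue until resources free up
            continue
        vm = min(fits, key=lambda v: v.free_cpu - cpu)
        vm.free_cpu -= cpu
        vm.free_mem -= mem
        placement[job_id] = vm.name
    return placement

print(schedule([("j1", 2, 4), ("j2", 4, 8)], [VM("vm1", 4, 8), VM("vm2", 8, 16)]))
# -> {'j1': 'vm1', 'j2': 'vm2'}
```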

  10. China's medical savings accounts: an analysis of the price elasticity of demand for health care.

    PubMed

    Yu, Hao

    2017-07-01

    Although medical savings accounts (MSAs) have drawn intense attention across the world for their potential in cost control, there is limited evidence of their impact on the demand for health care. This paper is intended to fill that gap. First, we built a dynamic model of a consumer's utility-maximization problem in the presence of a nonlinear price schedule embedded in an MSA. Second, the model was implemented using data from a 2-year MSA pilot program in China. The estimated price elasticity under MSAs was between -0.42 and -0.58, i.e., larger in magnitude than that reported in the literature. The relatively high price elasticity suggests that MSAs as an insurance feature may help control costs. However, the long-term effect of MSAs on health costs is subject to further analysis.

  11. Noisy preferences in risky choice: A cautionary note.

    PubMed

    Bhatia, Sudeep; Loomes, Graham

    2017-10-01

    We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables subject to the process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish grinding conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff formulation of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker select the best values of the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter and downfeed on the value of each objective function.
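
    For orientation, the weighted Tchebycheff scalarization at the core of this approach has the standard form below, where z* is the ideal objective vector; the lexicographic second stage described afterwards is one common variant rather than necessarily the authors' exact formulation:

```latex
\min_{x \in X} \; \max_{i = 1,\dots,k} \; w_i \left| f_i(x) - z_i^{*} \right|,
\qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1.
```

    A second, lexicographic stage then minimizes a secondary criterion (typically the sum of deviations) among the alternate optima of the first stage, which screens out weakly Pareto-optimal points.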

  13. Anthropogenic sulfate aerosol and the southward shift of tropical precipitation in the late 20th century

    NASA Astrophysics Data System (ADS)

    Hwang, Yen-Ting; Frierson, Dargan M. W.; Kang, Sarah M.

    2013-06-01

    In this paper, we demonstrate a global-scale southward shift of the tropical rain belt during the latter half of the 20th century in observations and global climate models (GCMs). In rain gauge data, the southward shift peaks in the 1980s and is associated with signals in Africa, Asia, and South America. A southward shift exists at a similar time in nearly all CMIP3 and CMIP5 historical simulations, and occurs on both land and ocean, although in most models the shifts are significantly less than in observations. Utilizing a theoretical framework based on atmospheric energetics, we perform an attribution of the zonal mean southward shift of precipitation across a large suite of CMIP3 and CMIP5 GCMs. Our results suggest that anthropogenic aerosol cooling of the Northern Hemisphere is the primary cause of the consistent southward shift across GCMs, although other processes affecting the atmospheric energy budget also contribute to the model-to-model spread.

  14. Wabash Valley Integrated Gasification Combined Cycle, Coal to Fischer Tropsch Jet Fuel Conversion Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Jayesh; Hess, Fernando; Horzen, Wessel van

    This report examines the feasibility of converting the existing Wabash Integrated Gasification Combined Cycle (IGCC) plant into a liquid fuel facility, with the goal of maximizing jet fuel production. The fuels produced are required to be in compliance with the lifecycle greenhouse gas (GHG) emissions requirements of Section 526 of the Energy Independence and Security Act of 2007 (EISA 2007 §526), so lifecycle GHG emissions from the fuel must be equal to or better than those of conventional fuels. Retrofitting an existing gasification facility reduces the technical risk and capital costs associated with a coal-to-liquids project, leading to a higher probability of implementation and more competitive liquid fuel prices. The existing combustion turbine will continue to operate on low-cost natural gas and low-carbon fuel gas from the gasification facility. The gasification technology utilized at Wabash is the E-Gas™ Technology and has been in commercial operation since 1995. In order to minimize capital costs, the study maximizes reuse of existing equipment with minimal modifications. Plant data and process models were used to develop process data for downstream units. Process modeling was utilized for the syngas conditioning, acid gas removal, CO2 compression and utility units. Syngas conversion to Fischer-Tropsch (FT) liquids and upgrading of the liquids were modeled and designed by Johnson Matthey Davy Technologies (JM Davy). In order to keep the GHG emission profile below that of conventional fuels, the CO2 from the process must be captured and exported for sequestration or enhanced oil recovery. In addition, the power for the plant's auxiliary loads had to be supplied by a low-carbon fuel source. Since the process produces a fuel gas with sufficient energy content to power the plant's loads, this fuel gas was converted to hydrogen and exported to the existing gas turbine for low-carbon power production. Utilizing low-carbon fuel gas and process steam in the existing combined cycle power plant provides sufficient power for all plant loads. The lifecycle GHG profile of the produced jet fuel is 95% of that of conventional jet fuel. Without converting the fuel gas to a low-carbon fuel gas, the emissions would be 108% of conventional jet fuel, and without any GHG mitigation, the profile would be 206%. Oil prices greater than $120 per barrel are required to reach the targeted internal rate of return on equity (IRROE) of 12%. Although the capital expenditure is much less than if a greenfield facility were built, the relatively small size of the plant, the assumed coal price, and the CTL risk profile used in the economic assumptions lead to a high cost of production. Assuming more favorable factors, the economic oil price could be reduced to $78 per barrel with GHG mitigation and $55 per barrel with no GHG mitigation.

  15. Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Ying; Smith, Kandler; Wood, Eric

    Temperature is one of the main factors that controls degradation in lithium-ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cell's temperature is affected not only by its own thermal characteristics but also by its neighbors, the cooling system and the pack configuration, which increase the noise level and the complexity of cell temperature prediction. This work proposes to model lithium-ion pack thermal behavior using a multi-node thermal network model, which predicts cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs.
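
    A multi-node thermal network of this kind is, in essence, a lumped RC circuit integrated in time; the sketch below is a minimal illustrative version (all capacities, resistances and heat-generation values are invented for the example, not the paper's identified parameters):

```python
# Minimal sketch of a multi-node (zonal) thermal network: each zone has a
# heat capacity, exchanges heat with neighboring zones and with the coolant
# through thermal resistances, and is advanced with forward Euler.
import numpy as np

def step(T, C, R_link, T_cool, R_cool, Q_gen, dt):
    """One explicit update of zone temperatures T (deg C)."""
    dT = np.zeros_like(T)                      # net heat flow per zone, W
    for (i, j), R in R_link.items():           # conduction between zones
        q = (T[j] - T[i]) / R
        dT[i] += q
        dT[j] -= q
    dT += (T_cool - T) / R_cool                # convection to coolant
    dT += Q_gen                                # ohmic / entropic heat, W
    return T + dt * dT / C                     # W / (J/K) -> K/s

T = np.array([25.0, 25.0, 25.0])               # three zones of a module
C = np.array([800.0, 800.0, 800.0])            # J/K per zone
R_link = {(0, 1): 2.0, (1, 2): 2.0}            # K/W between adjacent zones
for _ in range(600):                           # 10 minutes at dt = 1 s
    T = step(T, C, R_link, T_cool=20.0, R_cool=np.array([1.5, 3.0, 1.5]),
             Q_gen=np.array([4.0, 4.0, 4.0]), dt=1.0)
print(T)   # middle zone runs hotter: its coolant path is weaker
```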

  16. Encapsulating Non-Human Primate Multipotent Stromal Cells in Alginate via High Voltage for Cell-Based Therapies and Cryopreservation

    PubMed Central

    Gryshkov, Oleksandr; Pogozhykh, Denys; Hofmann, Nicola; Pogozhykh, Olena; Mueller, Thomas; Glasmacher, Birgit

    2014-01-01

    Alginate cell-based therapy requires further development focused on clinical application. To assess engraftment, risk of mutations and therapeutic benefit, studies should be performed in an appropriate non-human primate model, such as the common marmoset (Callithrix jacchus). In this work we encapsulated amnion-derived multipotent stromal cells (MSCs) from Callithrix jacchus in alginate beads of defined size using a high-voltage technique. Our results indicate that i) the alginate-cell mixing procedure and cell concentration do not affect the diameter of the alginate beads, ii) encapsulation of high cell numbers (up to 10×10^6 cells/ml) can be performed in alginate beads utilizing high voltage, and iii) high voltage (15–30 kV) does not alter the viability, proliferation and differentiation capacity of MSCs post-encapsulation compared with alginate-encapsulated cells produced by the traditional air-flow method. Consistent results were obtained over a period of 7 days of culture of the encapsulated MSCs and after cryopreservation utilizing a slow-cooling procedure (1 K/min). The results of this work show that high-voltage encapsulation can be developed further for cell-based therapies with alginate beads in a non-human primate model, moving towards human application. PMID:25259731

  17. Case-mix groups for VA hospital-based home care.

    PubMed

    Smith, M E; Baker, C R; Branch, L G; Walls, R C; Grimes, R M; Karklins, J M; Kashner, M; Burrage, R; Parks, A; Rogers, P

    1992-01-01

    The purpose of this study is to group hospital-based home care (HBHC) patients homogeneously by their characteristics with respect to cost of care, in order to develop alternative case-mix methods for management and reimbursement (allocation) purposes. Six Veterans Affairs (VA) HBHC programs in Fiscal Year (FY) 1986 that maximized patient, program, and regional variation were selected, all of which agreed to participate. All HBHC patients active in each program on October 1, 1987, in addition to all new admissions through September 30, 1988 (FY88), comprised the sample of 874 unique patients. Statistical methods included classification and regression trees (CART software: Statistical Software; Lafayette, CA), analysis of variance, and multiple linear regression. The resulting algorithm is a three-factor model that explains 20% of the cost variance (R^2 = 20%, with a cross-validation R^2 of 12%). Similar classifications, such as the RUG-II, which is utilized for VA nursing home and intermediate care, the VA outpatient resource allocation model, and the RUG-HHC, utilized in some states for reimbursing home health care in the private sector, explained less of the cost variance and are therefore less adequate for VA home care resource allocation.

  18. The Impact of Early Substance Use Disorder Treatment Response on Treatment Outcomes Among Pregnant Women With Primary Opioid Use.

    PubMed

    Tuten, Michelle; Fitzsimons, Heather; Hochheimer, Martin; Jones, Hendree E; Chisolm, Margaret S

    2018-03-13

    This study examined the impact of early patient response on treatment utilization and substance use among pregnant participants enrolled in substance use disorder (SUD) treatment. Treatment responders (TRs) and treatment nonresponders (TNRs) were compared on pretreatment and treatment measures. Regression models predicted treatment utilization and substance use. TR participants attended more treatment and had lower rates of substance use relative to TNR participants. Regression models for treatment utilization and substance use were significant. Maternal estimated gestational age (EGA) and baseline cocaine use were negatively associated with treatment attendance. Medication-assisted treatment, early treatment response, and baseline SUD treatment were positively associated with treatment attendance. Maternal EGA was negatively associated with counseling attendance; early treatment response was positively associated with counseling attendance. Predictors of any substance use at 1 month were maternal education, EGA, early treatment nonresponse, and baseline cocaine use. The single predictor of any substance use at 2 months was early treatment nonresponse. Predictors of opioid use at 1 month were maternal education, EGA, early treatment nonresponse, and baseline SUD treatment. Predictors of opioid use at 2 months were early treatment nonresponse, and baseline cocaine and marijuana use. Predictors of cocaine use at 1 month were early treatment nonresponse, baseline cocaine use, and baseline SUD treatment. Predictors of cocaine use at 2 months were early treatment nonresponse and baseline cocaine use. Early treatment response predicts more favorable maternal treatment utilization and substance use outcomes. Treatment providers should implement interventions to maximize patient early response to treatment.

  19. Intensely Exposed Oklahoma City Terrorism Survivors: Long-term Mental Health and Health Needs and Posttraumatic Growth.

    PubMed

    Tucker, Phebe; Pfefferbaum, Betty; Nitiéma, Pascal; Wendling, Tracy L; Brown, Sheryll

    2016-03-01

    In this study, we explore directly exposed terrorism survivors' mental health and health status, healthcare utilization, alcohol and tobacco use, and posttraumatic growth 18½ years postdisaster. Telephone surveys compared terrorism survivors and nonexposed community control subjects, using Hopkins Symptom Checklist, Breslau's PTSD screen, Posttraumatic Growth Inventory, and Health Status Questionnaire 12. Statistical analyses included multivariable logistic regression and linear modeling. Survivors, more than 80% injured, reported more anxiety and depression symptoms than did control subjects, with survivors' anxiety and depression associated with heavy drinking (≥5 drinks) and worse mental health and social functioning. While survivors had continued posttraumatic stress disorder symptoms (32 [23.2%] met probable posttraumatic stress disorder threshold), they also reported posttraumatic growth. Survivors had more care from physical, speech, respiratory, and occupational therapists. In this unprecedented long-term assessment, survivors' psychiatric symptoms, alcohol use, and ancillary health service utilization suggest unmet mental health and health needs. Extended recovery efforts might benefit from maximizing positive growth and coping.

  20. Price game and chaos control among three oligarchs with different rationalities in property insurance market.

    PubMed

    Ma, Junhai; Zhang, Junling

    2012-12-01

    Combining the actual competitive conditions in the Chinese property insurance market with the assumption that property insurance companies take marginal utility maximization as the basis of their decision-making when playing price games, we first established a price game model with three oligarchs who have different rationalities. Then, we discussed the existence and stability of the equilibrium points. Third, we studied the theoretical value of the Lyapunov exponent at the Nash equilibrium point and how it changes as the main parameters change, using numerical simulations of the system (bifurcations, chaotic attractors, and so on). Finally, we analyzed the influence that changes in different parameters have on the profits and utilities of the oligarchs and on their corresponding competitive advantage. Based on this, we used the variable feedback control method to control the chaos of the system and stabilize the chaotic state back to the Nash equilibrium point. The results have significant theoretical and practical application value.

  1. Using Atmospheric Dispersion Theory to Inform the Design of a Short-lived Radioactive Particle Release Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rishel, Jeremy P.; Keillor, Martin E.; Arrigo, Leah M.

    2016-01-01

    Atmospheric dispersion theory can be used to predict ground deposition of particulates downwind of a radionuclide release. This paper utilizes standard formulations found in Gaussian plume models to inform the design of an experimental release of short-lived radioactive particles into the atmosphere. Specifically, a source depletion algorithm is used to determine the optimum particle size and release height that maximize near-field deposition while minimizing both the required source activity and the fraction of activity lost to long-distance transport. The purpose of the release is to provide a realistic deposition pattern that might be observed downwind of a small-scale vent from an underground nuclear explosion. The deposition field will be used, in part, to investigate several techniques of gamma radiation survey and spectrometry that could be utilized by an On-Site Inspection team under the verification regime of the Comprehensive Nuclear-Test-Ban Treaty.
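
    The core quantities in such a design study are the ground-level Gaussian plume concentration and the resulting dry-deposition flux; the sketch below uses the textbook plume formula with crude power-law dispersion coefficients (the coefficient values and all inputs are illustrative, not the report's):

```python
# Minimal sketch of ground-level Gaussian plume concentration and dry
# deposition; sigma_y/sigma_z power laws are generic rural-type fits.
import numpy as np

def ground_deposition(Q, u, H, x, y, v_d, a_y=0.22, a_z=0.20):
    """Dry-deposition flux (Bq m^-2 s^-1) at downwind distance x, offset y.

    Q   source term (Bq/s), u wind speed (m/s), H release height (m),
    v_d particle deposition velocity (m/s).
    """
    sig_y = a_y * x**0.9           # crude horizontal dispersion coefficient
    sig_z = a_z * x**0.85          # crude vertical dispersion coefficient
    conc = (Q / (np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * np.exp(-H**2 / (2 * sig_z**2)))   # ground level, full reflection
    return v_d * conc              # deposition flux = v_d * air concentration

# Larger particles (higher v_d) deposit more near-field for a given height.
print(ground_deposition(Q=1e6, u=3.0, H=10.0, x=500.0, y=0.0, v_d=0.01))
```

    A source depletion algorithm, as in the paper, would additionally reduce Q with downwind distance to account for the activity already removed by deposition.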

  2. Price game and chaos control among three oligarchs with different rationalities in property insurance market

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Zhang, Junling

    2012-12-01

    Combining the actual competitive conditions in the Chinese property insurance market with the assumption that property insurance companies take marginal utility maximization as the basis of their decision-making when playing price games, we first established a price game model with three oligarchs who have different rationalities. Then, we discussed the existence and stability of the equilibrium points. Third, we studied the theoretical value of the Lyapunov exponent at the Nash equilibrium point and how it changes as the main parameters change, using numerical simulations of the system (bifurcations, chaotic attractors, and so on). Finally, we analyzed the influence that changes in different parameters have on the profits and utilities of the oligarchs and on their corresponding competitive advantage. Based on this, we used the variable feedback control method to control the chaos of the system and stabilize the chaotic state back to the Nash equilibrium point. The results have significant theoretical and practical application value.

  3. Hospital reimbursement incentives: is there a more effective option?--Part II.

    PubMed

    Weil, Thomas P

    2013-01-01

    As discussed in Part I of this article, hospital executives in Canada, Germany, and the United States manage their facilities' resources to maximize the incentives inherent in their respective reimbursement systems and thereby increase their bottom line. It was also discussed that an additional supply of available hospitals, physicians, and other services generates increased utilization. Part II discusses how the Patient Protection and Affordable Care Act of 2010 will eventually fail, since it controls neither prices nor utilization (e.g., imaging, procedures, ambulatory surgery, discretionary spending). The article concludes with a discussion of the German multipayer approach, with universal access and global budgets, which might well be a model for U.S. healthcare in the future. Although the German healthcare system has a number of shortfalls, its paradigm could offer the most appropriate compromise when selecting economic incentives to reduce the share of the U.S. gross domestic product spent on healthcare from 17.4% to roughly 12.0%.

  4. Game theoretic sensor management for target tracking

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh; Douville, Philip; Yang, Chun; Kadar, Ivan

    2010-04-01

    This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model of sensor management for multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent represents a sensor, and each sensor maximizes its utility using a game-theoretic approach. The greediness of each sensor is limited by the fact that sensor-to-target assignment efficiency decreases if too many sensor resources are assigned to the same target. This is similar to market mechanisms in the real world, such as agreements between buyers and sellers in an auction market. Sensors are willing to switch targets so that they can obtain their highest utility and apply their resources most efficiently. Our sub-game perfect equilibrium-based negotiation strategies assign sensors to targets dynamically and in a distributed fashion. Numerical simulations are performed to demonstrate our sensor-based negotiation approach for distributed sensor management.
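
    The diminishing-returns effect described above can be made concrete with a small sketch; note that the simple greedy assignment and the crowding-penalty utility below are hypothetical stand-ins, not the paper's sub-game perfect equilibrium strategies:

```python
# Minimal sketch: each sensor picks the target with the highest marginal
# utility, where utility shrinks as more sensors already cover that target.
def assign(sensors, targets, gain, crowd_penalty=0.3):
    """Greedy utility-driven sensor-to-target assignment."""
    load = {t: 0 for t in targets}             # sensors already on each target
    assignment = {}
    for s in sensors:
        best = max(targets,
                   key=lambda t: gain[s][t] / (1 + crowd_penalty * load[t]))
        assignment[s] = best
        load[best] += 1
    return assignment

gain = {"s1": {"t1": 5, "t2": 2},
        "s2": {"t1": 4, "t2": 3},
        "s3": {"t1": 4, "t2": 1}}
print(assign(["s1", "s2", "s3"], ["t1", "t2"], gain))
```

    In the paper's negotiation framework, sensors would keep switching targets until no sensor can improve its utility unilaterally; the greedy pass here only illustrates how crowding reduces the payoff of piling onto one target.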

  5. Optimal control, investment and utilization schemes for energy storage under uncertainty

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology for utilizing energy storage within a microgrid so as to interact optimally with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework that extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency operations; (ii) an optimal market strategy of buying and selling over 24-hour periods; and (iii) optimal storage charge and discharge over much shorter time intervals.

  6. Increased cardiac output elicits higher V̇O2max in response to self-paced exercise.

    PubMed

    Astorino, Todd Anthony; McMillan, David William; Edmunds, Ross Montgomery; Sanchez, Eduardo

    2015-03-01

    Recently, a self-paced protocol demonstrated higher maximal oxygen uptake versus the traditional ramp protocol. The primary aim of the current study was to further explore potential differences in maximal oxygen uptake between the ramp and self-paced protocols using simultaneous measurement of cardiac output. Active men and women of various fitness levels (N = 30, mean age = 26.0 ± 5.0 years) completed 3 graded exercise tests separated by a minimum of 48 h. Participants initially completed progressive ramp exercise to exhaustion to determine maximal oxygen uptake followed by a verification test to confirm maximal oxygen uptake attainment. Over the next 2 sessions, they performed a self-paced and an additional ramp protocol. During exercise, gas exchange data were obtained using indirect calorimetry, and thoracic impedance was utilized to estimate hemodynamic function (stroke volume and cardiac output). One-way ANOVA with repeated measures was used to determine differences in maximal oxygen uptake and cardiac output between ramp and self-paced testing. Results demonstrated lower (p < 0.001) maximal oxygen uptake via the ramp (47.2 ± 10.2 mL·kg(-1)·min(-1)) versus the self-paced (50.2 ± 9.6 mL·kg(-1)·min(-1)) protocol, with no interaction (p = 0.06) seen for fitness level. Maximal heart rate and cardiac output (p = 0.02) were higher in the self-paced protocol versus ramp exercise. In conclusion, data show that the traditional ramp protocol may underestimate maximal oxygen uptake compared with a newly developed self-paced protocol, with a greater cardiac output potentially responsible for this outcome.

  7. Dietary 2-oxoglutarate prevents bone loss caused by neonatal treatment with maximal dexamethasone dose

    PubMed Central

    Tomaszewska, Ewa; Muszyński, Siemowit; Blicharski, Tomasz; Pierzynowski, Stefan G

    2017-01-01

    Synthetic glucocorticoids (GCs) are widely used, in a variety of dosages, for the treatment of premature infants with chronic lung disease, respiratory distress syndrome, allergies, asthma, and other inflammatory and autoimmune conditions. Yet, adverse effects such as glucocorticoid-induced osteoporosis and growth retardation are recognized. Conversely, 2-oxoglutarate (2-Ox), a precursor of glutamine, glutamate, and collagen amino acids, exerts protective effects on bone development. Our aim was to elucidate the effect of dietary 2-Ox on bone loss caused by neonatal treatment with a clinically relevant maximal therapeutic dexamethasone (Dex) dose. Long bones of neonatal female piglets receiving Dex or Dex+2-Ox, or untreated, were examined through measurements of mechanical properties, density, mineralization, geometry, histomorphometry, and histology. Selected hormones and markers of bone turnover and growth were also analyzed. Neonatal administration of a clinically relevant maximal dose of Dex alone led to an over 30% decrease in bone mass and ultimate strength (P < 0.001 for all). Bone length (13 and 7% for femur and humerus, respectively) and other geometrical parameters (13–45%) decreased compared to the control (P < 0.001 for all). Dex impaired bone growth and caused hormonal imbalance. Dietary 2-Ox counteracted the influence of Dex, and the vast majority of assessed bone parameters were restored almost to control levels. Piglets receiving 2-Ox had heavier, denser, and stronger bones; higher levels of growth hormone and osteocalcin; and preserved microarchitecture of trabecular bone compared to the Dex group. 2-Ox administered postnatally had the potential to maintain the bone structure of animals simultaneously treated with maximal therapeutic doses of Dex, which, in our opinion, may open up a new opportunity in developing combined treatments for children treated with GCs. Impact statement: The present study has shown, for the first time, that dietary 2-oxoglutarate (2-Ox) administered postnatally has the potential to improve or maintain the bone structure of animals simultaneously treated with maximal therapeutic doses of dexamethasone (Dex). It may open a new direction in the search for and development of combined treatments for children treated with glucocorticoids (GCs), since a growing group of children are exposed to synthetic GCs and adverse effects such as glucocorticoid-induced osteoporosis and growth retardation are recognized. Currently proposed combined therapies have numerous side effects. Thus, this study proposes a new direction in combined therapies utilizing dietary supplementation with a glutamine derivative. The impairment caused by Dex in the presented long-bone animal model was prevented by dietary supplementation with 2-Ox, and the vast majority of assessed bone parameters were restored almost to control levels. These results support the previous thesis on the regulatory mechanism of nutrient utilization by glutamine derivatives and enrich nutritional science. PMID:28178857

  8. Effects of Maximal Sodium and Potassium Conductance on the Stability of Hodgkin-Huxley Model

    PubMed Central

    Wang, Kuanquan; Yuan, Yongfeng; Zhang, Henggui

    2014-01-01

    The Hodgkin-Huxley (HH) equations constitute the first computational cell model and pioneered the use of models to study electrophysiological problems. The model consists of four differential equations based on experimental data for ion channels. Maximal conductance is an important characteristic of the different channels. In this study, mathematical methods are used to investigate the importance of the maximal sodium conductance g_Na and the maximal potassium conductance g_K. Applying stability theory, and taking g_Na and g_K as variables, we analyze the stability and bifurcations of the model. Bifurcations are found when the variables change, and the bifurcation points and boundary are also calculated. There is only one bifurcation point when g_Na is the variable, while there are two points when g_K is the variable. The (g_Na, g_K) plane is partitioned into two regions, and the upper bifurcation boundary is similar to a line when both g_Na and g_K are variables. Numerical simulations illustrate the validity of the analysis. The results obtained could be helpful in studying diseases caused by maximal conductance anomalies. PMID:25104970
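
    Since the analysis centers on the maximal conductances, a minimal simulation with g_Na and g_K exposed as parameters helps build intuition; the sketch below uses the classic squid-axon constants in the modern -65 mV resting convention, with simple forward-Euler integration (step size and the printed diagnostic are illustrative choices):

```python
# Minimal forward-Euler Hodgkin-Huxley simulation with the maximal
# conductances g_Na and g_K exposed as parameters (mV / ms / uF units).
import numpy as np

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate(g_Na=120.0, g_K=36.0, g_L=0.3, I_ext=10.0, dt=0.01, T=50.0):
    V, n, m, h = -65.0, 0.317, 0.053, 0.596    # approximate resting state
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - 50.0)    # E_Na =  50 mV
        I_K  = g_K  * n**4     * (V + 77.0)    # E_K  = -77 mV
        I_L  = g_L             * (V + 54.4)    # E_L  = -54.4 mV
        V += dt * (I_ext - I_Na - I_K - I_L)   # C_m = 1 uF/cm^2
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return np.array(trace)

# Sweeping g_Na or g_K and checking whether the voltage settles or keeps
# spiking reproduces the kind of bifurcation boundary the paper computes.
print(simulate().max())
```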

  9. Renal Perfusion in Scleroderma Patients Assessed by Microbubble-Based Contrast-Enhanced Ultrasound

    PubMed Central

    Kleinert, Stefan; Roll, Petra; Baumgaertner, Christian; Himsel, Andrea; Mueller, Adelheid; Fleck, Martin; Feuchtenberger, Martin; Jenett, Manfred; Tony, Hans-Peter

    2012-01-01

    Objectives: Renal damage is common in scleroderma. It can occur acutely or chronically. Renal reserve might already be impaired before it can be detected in laboratory findings. Microbubble-based contrast-enhanced ultrasound has been demonstrated to improve blood perfusion imaging in organs. Therefore, we conducted a study to assess renal perfusion in scleroderma patients utilizing this novel technique. Materials and Methodology: Microbubble-based contrast agent was infused and destroyed using a high mechanical index on a Siemens Sequoia (curved array, 4.5 MHz). Replenishment was recorded for 8 seconds. Regions of interest (ROIs) were analyzed in the renal parenchyma, an interlobular artery and a renal pyramid with quantitative contrast software (CUSQ 1.4, Siemens Acuson, Mountain View, California). Time to maximal enhancement (TmE), maximal enhancement (mE) and maximal enhancement relative to the maximal enhancement of the interlobular artery (mE%A) were calculated for the different ROIs. Results: There was a linear correlation between the time to maximal enhancement in the parenchyma and the glomerular filtration rate. However, the other parameters did not reveal significant differences between scleroderma patients and healthy controls. Conclusion: Renal perfusion of scleroderma patients, including the glomerular filtration rate, can be assessed using microbubble-based contrast media. PMID:22670165

  10. Nutritional benefit from leaf litter utilization in the pitcher plant Nepenthes ampullaria.

    PubMed

    Pavlovič, Andrej; Slováková, Ludmila; Šantrůček, Jiří

    2011-11-01

    The pitcher plant Nepenthes ampullaria has an unusual growth pattern, which differs markedly from that of other species in the carnivorous genus Nepenthes. Its pitchers have a reflexed lid and sit above the soil surface in a tightly packed 'carpet'. They contain a significant amount of plant-derived material, suggesting that this species is partially herbivorous. We tested the hypothesis that the plant benefits from leaf litter utilization through increased photosynthetic efficiency, in the sense of the cost/benefit model. Stable nitrogen isotope abundance indicated that N. ampullaria derived around 41.7 ± 5.5% of lamina nitrogen and 54.8 ± 7.0% of pitcher nitrogen from leaf litter. The concentrations of nitrogen and assimilation pigments, and the rate of net photosynthesis (A(N)), increased in the lamina as a result of feeding, but did not increase in the trap. However, the maximal (F(v)/F(m)) and effective (Φ(PSII)) photochemical quantum yields of photosystem II were unaffected. Our data indicate that N. ampullaria benefits from leaf litter utilization, and our study provides the first experimental evidence that the unique nitrogen sequestration strategy of N. ampullaria provides benefits in terms of photosynthesis and growth. © 2011 Blackwell Publishing Ltd.

  11. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    PubMed

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.

  12. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    PubMed Central

    Olatinwo, Segun O.

    2018-01-01

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near–far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity. PMID:29652866
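
    For context, sum-throughput maximization in harvest-then-transmit wireless powered networks is commonly posed as below; this is a generic formulation from that literature, shown for orientation, and the authors' system model and constraints differ in detail:

```latex
\max_{\tau_0,\,\tau_1,\dots,\tau_K} \;\; \sum_{i=1}^{K} \tau_i \log_2\!\left( 1 + \frac{\eta\, h_i\, g_i\, P\, \tau_0}{\sigma^2\, \tau_i} \right)
\qquad \text{s.t.} \quad \tau_0 + \sum_{i=1}^{K} \tau_i \le 1, \qquad \tau_i \ge 0 .
```

    Here τ_0 is the fraction of each frame spent broadcasting energy, τ_i is node i's uplink slot, η is the harvesting efficiency, h_i and g_i are the downlink and uplink channel gains, P is the energy source's transmit power, and σ² is the noise power. The doubly near-far problem arises because a distant node suffers both a weak h_i (it harvests less) and a weak g_i (it delivers less of what it harvests), which is why pure sum-throughput solutions tend to starve far nodes and fairness-aware designs such as the proposed system are needed.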

  13. Lightweight, High Performance, Low Cost Propulsion Systems for Mars Exploration Missions to Maximize Science Payload

    NASA Astrophysics Data System (ADS)

    Trinh, H. P.

    2012-06-01

    Utilization of new cold hypergolic propellants and leveraging of Missile Defense Agency technology for propulsion systems on Mars exploration missions will increase science payload and provide significant payoffs and benefits for NASA missions.

  14. Genetic variation in the USDA Chamaecrista fasciculata collection

    USDA-ARS?s Scientific Manuscript database

    Germplasm collections serve as critical repositories of genetic variation. Characterizing genetic diversity in existing collections is necessary to maximize their utility and to guide future collecting efforts. We have used AFLP markers to characterize genetic variation in the USDA germplasm collect...

  15. Muscle-spring dynamics in time-limited, elastic movements.

    PubMed

    Rosario, M V; Sutton, G P; Patek, S N; Sawicki, G S

    2016-09-14

    Muscle contractions that load in-series springs with slow speed over a long duration do maximal work and store the most elastic energy. However, time constraints, such as those experienced during escape and predation behaviours, may prevent animals from achieving maximal force capacity from their muscles during spring-loading. Here, we ask whether animals that have limited time for elastic energy storage operate with springs that are tuned to submaximal force production. To answer this question, we used a dynamic model of a muscle-spring system undergoing a fixed-end contraction, with parameters from a time-limited spring-loader (bullfrog: Lithobates catesbeiana) and a non-time-limited spring-loader (grasshopper: Schistocerca gregaria). We found that when muscles have less time to contract, stored elastic energy is maximized with lower spring stiffness (quantified as spring constant). The spring stiffness measured in bullfrog tendons permitted less elastic energy storage than was predicted by a modelled, maximal muscle contraction. However, when muscle contractions were modelled using biologically relevant loading times for bullfrog jumps (50 ms), tendon stiffness actually maximized elastic energy storage. In contrast, grasshoppers, which are not time limited, exhibited spring stiffness that maximized elastic energy storage when modelled with a maximal muscle contraction. These findings demonstrate the significance of evolutionary variation in tendon and apodeme properties to realistic jumping contexts as well as the importance of considering the effect of muscle dynamics and behavioural constraints on energy storage in muscle-spring systems. © 2016 The Author(s).

  16. HPLC studio: a novel software utility to perform HPLC chromatogram comparison for screening purposes.

    PubMed

    García, J B; Tormo, José R

    2003-06-01

    A new tool, HPLC Studio, was developed for the comparison of high-performance liquid chromatography (HPLC) chromatograms from microbial extracts. The new utility makes it possible to create a virtual chromatogram by mixing up to 20 individual chromatograms. The virtual chromatogram is the first step in establishing a ranking of the microbial fermentation conditions based on either the area or diversity of HPLC peaks. The utility was used to maximize the diversity of secondary metabolites tested from a microorganism and therefore increase the chances of finding new lead compounds in a drug discovery program.
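
    The abstract does not specify how the virtual chromatogram is composed, so the sketch below is a hypothetical illustration only: it forms a virtual trace as the point-wise maximum of retention-time-aligned chromatograms and ranks fermentation conditions by total peak area:

```python
# Hypothetical illustration of 'mixing' chromatograms into a virtual one;
# HPLC Studio's actual combination and ranking rules are not described here.
import numpy as np

rng = np.random.default_rng(0)
traces = rng.random((20, 3000))          # 20 conditions x aligned time points
virtual = traces.max(axis=0)             # virtual chromatogram envelope
ranking = np.argsort(traces.sum(axis=1))[::-1]   # rank conditions by area
print(virtual.shape, ranking[:3])
```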

  17. Network approach for decision making under risk—How do we choose among probabilistic options with the same expected value?

    PubMed Central

    Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing ‘goal’ and ‘time’ factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665

  18. Network approach for decision making under risk-How do we choose among probabilistic options with the same expected value?

    PubMed

    Pan, Wei; Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.
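
    The 'goal' and 'time' idea can be made concrete with a small Monte Carlo experiment: two gambles with identical expected value per round can differ sharply in the probability of reaching a wealth goal by a deadline. The payoffs, goal and horizon below are illustrative inventions, not the paper's examples:

```python
# Monte Carlo estimate of the probability of reaching a wealth goal within a
# time horizon, for repeated plays of a gamble with given payoffs/probabilities.
import random

def p_reach_goal(payoffs, probs, start=10, goal=20, horizon=10, trials=20000):
    """Probability that repeated plays hit `goal` within `horizon` rounds."""
    hits = 0
    for _ in range(trials):
        w = start
        for _ in range(horizon):
            w += random.choices(payoffs, probs)[0]
            if w >= goal:
                hits += 1
                break
    return hits / trials

safe  = p_reach_goal(payoffs=[1], probs=[1])               # EV = +1 per round
risky = p_reach_goal(payoffs=[11, -1], probs=[1/6, 5/6])   # EV = +1 per round
print(safe, risky)   # same expected value, different goal-reaching chances
```

    Under an expected-utility criterion with linear utility the two gambles are indistinguishable, while the goal-reaching probability separates them, which is the kind of ambiguity the network approach is designed to resolve.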

  19. Martian resource locations: Identification and optimization

    NASA Astrophysics Data System (ADS)

    Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam

    2005-04-01

    The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.

  20. Endogenous patient responses and the consistency principle in cost-effectiveness analysis.

    PubMed

    Liu, Liqun; Rettenmaier, Andrew J; Saving, Thomas R

    2012-01-01

    In addition to incurring direct treatment costs and generating direct health benefits that improve longevity and/or health-related quality of life, medical interventions often have further or "unrelated" financial and health impacts, raising the issue of what costs and effects should be included in calculating the cost-effectiveness ratio of an intervention. The "consistency principle" in medical cost-effectiveness analysis (CEA) requires that one include both the cost and the utility benefit of a change (in medical expenditures, consumption, or leisure) caused by an intervention or neither of them. By distinguishing between exogenous changes directly brought about by an intervention and endogenous patient responses to the exogenous changes, and within a lifetime utility maximization framework, this article addresses 2 questions related to the consistency principle: 1) how to choose among alternative internally consistent exclusion/inclusion rules, and 2) what to do with survival consumption costs and earnings. It finds that, for an endogenous change, excluding or including both the cost and the utility benefit of the change does not alter cost-effectiveness results. Further, in agreement with the consistency principle, welfare maximization implies that consumption costs and earnings during the extended life directly caused by an intervention should be included in CEA.
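
    As a reminder of where the inclusion/exclusion question bites, the cost-effectiveness ratio compares incremental costs to incremental effects; the schematic decomposition below uses illustrative groupings rather than the article's notation:

```latex
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
\;=\; \frac{\Delta C_{\mathrm{treatment}} + \Delta C_{\mathrm{unrelated}} + \Delta C_{\mathrm{survival\ consumption}} - \Delta W_{\mathrm{earnings}}}{\Delta E_{\mathrm{longevity}} + \Delta E_{\mathrm{quality\ of\ life}}} .
```

    The consistency principle then requires that a cost term be kept in the numerator only if its utility counterpart is kept in the denominator (and vice versa); the article's result is that, for endogenous patient responses, this paired inclusion or exclusion leaves the cost-effectiveness results unchanged.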

  1. A fuel-efficient cruise performance model for general aviation piston engine airplanes. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Parkinson, R. C. H.

    1983-01-01

    A fuel-efficient cruise performance model which facilitates maximizing the specific range of General Aviation airplanes powered by spark-ignition piston engines and propellers is presented. Airplanes of fixed design only are considered. The uses and limitations of typical Pilot Operating Handbook cruise performance data for constructing cruise performance models suitable for maximizing specific range are first examined. These data are found to be inadequate for constructing such models. A new model of General Aviation piston-prop airplane cruise performance is then developed. This model consists of two subsystem models: the airframe-propeller-atmosphere subsystem model and the engine-atmosphere subsystem model. The new model facilitates maximizing specific range and, by virtue of its simplicity and low-volume data storage requirements, appears suitable for airborne microprocessor implementation.
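
    As an illustration of the kind of computation such a model supports (not the report's subsystem models), the sketch below maximizes specific range over airspeed for an assumed parabolic drag polar, with fuel flow taken as proportional to power and every constant invented for the example:

      import numpy as np

      # Illustrative constants for a light piston single (assumed, not from the report).
      W   = 10_000.0               # weight, N
      S   = 16.0                   # wing area, m^2
      rho = 1.0                    # air density at cruise altitude, kg/m^3
      CD0, k = 0.03, 0.05          # drag polar: CD = CD0 + k*CL^2
      sfc = 8e-8                   # fuel mass flow per unit power, kg/J (assumed)

      V  = np.linspace(30.0, 90.0, 601)                 # true airspeed, m/s
      CL = W / (0.5 * rho * V**2 * S)                   # level-flight lift coefficient
      D  = 0.5 * rho * V**2 * S * (CD0 + k * CL**2)     # drag, N
      specific_range = V / (sfc * D * V)                # metres flown per kg of fuel

      best = int(np.argmax(specific_range))
      print(f"best-range speed ~ {V[best]:.1f} m/s, "
            f"specific range ~ {specific_range[best] / 1000:.1f} km/kg")

    With fuel flow proportional to power, the optimum falls at the minimum-drag speed; a handbook-data model would replace the assumed polar and fuel-flow law with tabulated engine and airframe data.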

  2. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes there are several arc types that differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  3. An auxiliary graph based dynamic traffic grooming algorithm in spatial division multiplexing enabled elastic optical networks with multi-core fibers

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Tian, Rui; Yu, Xiaosong; Zhang, Jiawei; Zhang, Jie

    2017-03-01

    A proper traffic grooming strategy in dynamic optical networks can improve the utilization of bandwidth resources. An auxiliary graph (AG) is designed to solve the traffic grooming problem under a dynamic traffic scenario in spatial division multiplexing enabled elastic optical networks (SDM-EON) with multi-core fibers. Five traffic grooming policies achieved by adjusting the edge weights of an AG are proposed and evaluated through simulation: maximal electrical grooming (MEG), maximal optical grooming (MOG), maximal SDM grooming (MSG), minimize virtual hops (MVH), and minimize physical hops (MPH). Numerical results show that each traffic grooming policy has distinct strengths. Among the different policies, the MPH policy achieves the lowest bandwidth blocking ratio, MEG saves the most transponders, and MSG occupies the fewest cores per request.
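
    The core mechanism (encoding a grooming policy as edge weights on an auxiliary graph, then routing by least weight) can be sketched as follows; the three-node graph and the two weight assignments are invented and do not reproduce the paper's five policies:

      import heapq

      def dijkstra(graph, src, dst):
          """Least-weight route in a weighted digraph {u: [(v, w), ...]}."""
          dist, heap = {src: 0.0}, [(0.0, src, [src])]
          while heap:
              d, u, path = heapq.heappop(heap)
              if u == dst:
                  return d, path
              for v, w in graph.get(u, []):
                  if d + w < dist.get(v, float("inf")):
                      dist[v] = d + w
                      heapq.heappush(heap, (d + w, v, path + [v]))
          return float("inf"), []

      # Edges carry a kind; a policy is simply a weight per kind.
      EDGES = {"A": [("B", "groomed"), ("C", "new")],
               "B": [("D", "groomed")],
               "C": [("D", "new")]}
      POLICIES = {"prefer_grooming":  {"groomed": 1.0, "new": 5.0},
                  "prefer_new_paths": {"groomed": 5.0, "new": 1.0}}

      for name, wt in POLICIES.items():
          graph = {u: [(v, wt[kind]) for v, kind in vs] for u, vs in EDGES.items()}
          cost, path = dijkstra(graph, "A", "D")
          print(f"{name}: route {'->'.join(path)} (weight {cost})")

    Changing only the weight table switches the chosen route, which is how one AG can host several grooming policies.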

  4. Mathematical problems of quantum teleportation

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshiharu; Asano, Masanari; Ohya, Masanori

    2011-03-01

    It has been considered that a maximally entangled state is needed for complete quantum teleportation. However, Kossakowski and Ohya proposed a scheme of complete teleportation using a non-maximally entangled state [1]. Based on their scheme, we proposed a teleportation model of a 2-level state with a non-maximally entangled state [2]. In the present study, we construct its expanded model, in which Alice can teleport an m-level state even if a non-maximally entangled state is used.

  5. Feasibility Study of Coal Gasification/Fuel Cell/Cogeneration Project. Scranton, Pennsylvania Site. Project Description,

    DTIC Science & Technology

    1985-11-01

    arranged to maximize thermal output; - Plant will meet PURPA criteria for recognition as a "Qualifying Facility" (QF). - GFC emissions will be...10. Plant must meet Public Utilities Regulatory Policies Act (PURPA) criteria for classification as a "Qualifying Facility" (QF). 11. Visual effect...assessments. The Public Utilities Regulatory Policies Act (PURPA), which is administered by the Federal Energy Regulatory Commission (FERC), governs how a

  6. Global Snow from Space: Development of a Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Astrophysics Data System (ADS)

    Forman, B. A.; Kumar, S.; LeMoigne, J.; Nag, S.

    2017-12-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary - or perhaps contradictory - information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  7. Towards the Development of a Global, Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical orbital configuration. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary - or perhaps contradictory - information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: 1. What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? 2. How might observations be coordinated (in space and time) to maximize utility? 3. What is the additional utility associated with an additional observation? 4. How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  8. Towards the Development of a Global, Satellite-Based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary or perhaps contradictory information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  9. Impact of Private Health Insurance on Lengths of Hospitalization and Healthcare Expenditure in India: Evidences from a Quasi-Experiment Study.

    PubMed

    Vellakkal, Sukumar

    2013-01-01

    Health insurers in the Indian healthcare market retrospectively administer package rates for various inpatient procedures as a provider payment mechanism for empanelled hospitals. This study analyzed the impact of private health insurance on healthcare utilization in terms of both length of hospitalization and per-day hospitalization expenditure in this market, where package rates are retrospectively defined as the healthcare provider payment mechanism. The claim records of 94,443 insured individuals and the hospitalisation data of 32,665 uninsured individuals were used. By applying a stepwise propensity score matching method, the sample of uninsured individuals was matched with insured individuals, and the 'average treatment effect on the treated' (ATT) was estimated. Overall, the utility-maximizing strategies of hospitals, the insured, and insurers competed with each other. However, two aligning co-operative strategies between insurers and hospitals were significant, with hospitals playing the dominant role. Hospitals maximize their utility by providing high-cost healthcare on par with the pre-defined package rates, but align with the interest of insurers by reducing the number (length) of hospitalisation days. The empirical results show that private health insurance coverage leads to i) a reduction in length of hospitalization, and ii) an increase in per-day hospital (health) expenditure. It is necessary to regulate and develop a competent healthcare market in the country, with a proper monitoring mechanism for healthcare utilization and benchmarks for pricing and provision of healthcare services.
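
    For readers unfamiliar with the matching step, the sketch below runs propensity score matching and ATT estimation on synthetic data using scikit-learn; the covariates, sample size, and the built-in -1.0 day effect are invented and unrelated to the study's dataset:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 3))                     # covariates (e.g. age, income, severity)
      p = 1 / (1 + np.exp(-(X @ [0.8, 0.5, -0.4])))   # true propensity to be insured
      insured = rng.random(n) < p
      # Synthetic outcome: insurance shortens the stay by 1 day on average.
      los = 5 + X @ [0.3, -0.2, 0.6] - 1.0 * insured + rng.normal(size=n)

      # 1) Estimate propensity scores from the covariates.
      ps = LogisticRegression().fit(X, insured).predict_proba(X)[:, 1]

      # 2) Match each insured unit to the uninsured unit with the closest score.
      t_idx, c_idx = np.where(insured)[0], np.where(~insured)[0]
      matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

      # 3) ATT = mean outcome difference across matched pairs.
      att = (los[t_idx] - los[matches]).mean()
      print(f"ATT estimate: {att:.2f} days (construction used a true effect of -1.0)")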

  10. Effective Fund-Raising for Non-profit Camps.

    ERIC Educational Resources Information Center

    Larson, Paula

    1998-01-01

    Identifies and describes strategies for effective fundraising: imagining the possibilities, identifying fund-raising sources, targeting fund-raising efforts, maximizing time by utilizing public relations efforts and involving staff, writing quality proposals and requests, and staying educated on fund-raising topics. Sidebars describe planned…

  11. RFC: EPA's Action Plan for Bisphenol A Pursuant to EPA's Data Quality Guidelines

    EPA Pesticide Factsheets

    The American Chemistry Council (ACC) submits this Request for Correction to the U.S. Environmental Protection Agency under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency

  12. Maximizing Educational Opportunity through Community Resources.

    ERIC Educational Resources Information Center

    Maradian, Steve

    In the face of increased demands and diminishing resources, educational administrators at correctional facilities should look beyond institutional resources and utilize the services of area community colleges. The community college has an established track record in correctional education. Besides the nationally recognized correctional programs…

  13. Understanding medication compliance and persistence from an economics perspective.

    PubMed

    Elliott, Rachel A; Shinogle, Judith A; Peele, Pamela; Bhosle, Monali; Hughes, Dyfrig A

    2008-01-01

    An increased understanding of the reasons for noncompliance and lack of persistence with prescribed medication is an important step to improve treatment effectiveness, and thus patient health. Explanations have been attempted from epidemiological, sociological, and psychological perspectives. Economic models (utility maximization, time preferences, health capital, bilateral bargaining, stated preference, and prospect theory) may contribute to the understanding of medication-taking behavior. Economic models are applied to medication noncompliance. Traditional consumer choice models under a budget constraint do apply to medication-taking behavior in that increased prices cause decreased utilization. Nevertheless, empiric evidence suggests that budget constraints are not the only factor affecting consumer choice around medicines. Examination of time preference models suggests that the intuitive association between time preference and medication compliance has not been investigated extensively, and has not been proven empirically. The health capital model has theoretical relevance, but has not been applied to compliance. Bilateral bargaining may present an alternative model to concordance of the patient-prescriber relationship, taking account of game-playing by either party. Nevertheless, there is limited empiric evidence to test its usefulness. Stated preference methods have been applied most extensively to medicines use. Evidence suggests that patients' preferences are consistently affected by side effects, and that preferences change over time, with age and experience. Prospect theory attempts to explain how new information changes risk perceptions and associated behavior but has not been applied empirically to medication use. Economic models of behavior may contribute to the understanding of medication use, but more empiric work is needed to assess their applicability.

  14. Dishonest Academic Conduct: From the Perspective of the Utility Function.

    PubMed

    Sun, Ying; Tian, Rui

    Dishonest academic conduct has aroused extensive attention in academic circles. To explore how scholars make decisions according to the principle of maximal utility, the author constructed a general utility function based on expected utility theory. Concrete utility functions were deduced for three types of scholars: risk-neutral, risk-averse, and risk-preferring. Following this, an assignment method was adopted to analyze and compare the scholars' utilities of academic conduct. It was concluded that changing the values of risk costs, internal condemnation costs, academic benefits, and the subjective estimation of penalties following dishonest academic conduct can change the utility of academic dishonesty. The results of the current study suggest that within scientific research, measures to prevent and govern dishonest academic conduct should be formulated according to the distinct effects of these four variables.
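
    A minimal version of such a comparison, for the risk-neutral case only, is sketched below; every parameter value is assumed for illustration and none is taken from the article:

      def eu_dishonest(benefit, penalty, internal_cost, p_caught, u=lambda x: x):
          """Expected utility of dishonest conduct: the internal condemnation cost
          always applies; the penalty applies only if the conduct is detected.
          Pass a concave or convex `u` to model risk-averse or risk-preferring scholars."""
          caught = u(benefit - penalty - internal_cost)
          unseen = u(benefit - internal_cost)
          return p_caught * caught + (1 - p_caught) * unseen

      honest_utility = 0.0   # normalized baseline utility of honest conduct (assumed)
      for p in (0.05, 0.20, 0.50):
          eu = eu_dishonest(benefit=10, penalty=30, internal_cost=2, p_caught=p)
          verdict = "dishonesty pays" if eu > honest_utility else "honesty pays"
          print(f"subjective detection probability {p:.2f}: EU = {eu:+.1f} -> {verdict}")

    Raising the subjective estimate of detection or the size of the penalty flips the decision, which is the lever the four variables above operate on.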

  15. Quantitative ionization chamber alignment to a water surface: Theory and simulation.

    PubMed

    Siebers, Jeffrey V; Ververs, James D; Tessier, Frédéric

    2017-07-01

    To examine the response properties of cylindrical cavity ionization chambers (ICs) in the depth-ionization buildup region so as to obtain a robust chamber-signal-based method for definitive water surface identification, and hence absolute ionization chamber depth localization. An analytical model with simplified physics and geometry is developed to explore the theoretical aspects of ionization chamber response near a phantom water surface. Monte Carlo simulations with full physics and ionization chamber geometry are utilized to extend the model's findings to realistic ion chambers in realistic beams and to study the effects of IC design parameters on the entrance dose response. Design parameters studied include full and simplified IC designs with varying central electrode thickness, wall thickness, and outer chamber radius. Piecewise continuous fits to the depth-ionization signal gradient are used to quantify potential deviation of the gradient discontinuity from the chamber outer radius. Exponential, power, and hyperbolic sine functional forms are used to model the gradient for chamber depths from zero to the depth of the gradient discontinuity. The depth-ionization gradient as a function of depth is maximized and discontinuous when a submerged IC's outer radius coincides with the water surface. We term this depth the gradient chamber alignment point (gCAP). The maximum deviation between the gCAP location and the chamber outer radius is 0.13 mm for a hypothetical 4 mm thick wall, 6.45 mm outer radius chamber using the power function fit; however, the chamber outer radius is within the 95% confidence interval of the gCAP determined by this fit. gCAP dependence on the chamber wall thickness is possible, but not at a clinically relevant level. The depth-ionization gradient has a discontinuity and is maximized when the outer radius of a submerged IC coincides with the water surface. This feature can be used to auto-align ICs to the water surface at the time of scanning and/or be applied retrospectively to scan data to quantify absolute IC depth. Utilization of the gCAP should yield accurate and reproducible depth calibration for clinical depth-ionization measurements between setups and between users. © 2017 American Association of Physicists in Medicine.

  16. Optimal Operation of Data Centers in Future Smart Grid

    NASA Astrophysics Data System (ADS)

    Ghamkhari, Seyed Mahdi

    The emergence of cloud computing has established a growing trend towards building massive, energy-hungry, and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have a major impact on the electric grid by significantly increasing the load at locations where they are built. However, data centers also provide opportunities to help the grid with respect to robustness and load balancing. For instance, as data centers are major and yet flexible electric loads, they are natural candidates to offer ancillary services, such as voluntary load reduction, to the smart grid. Also, data centers may better stabilize the price of energy in the electricity markets, and at the same time reduce their electricity cost, by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. In this thesis, such potentials are investigated within an analytical profit maximization framework by developing new mathematical models based on queuing theory. The proposed models capture the trade-off between quality-of-service and power consumption in data centers. They are not only accurate but also possess convexity characteristics that facilitate joint optimization of data centers' service rates, demand levels, and demand bids to different electricity markets. The analysis is further expanded to develop a unified, comprehensive energy portfolio optimization for data centers in the future smart grid. Specifically, it is shown how utilizing one energy option may affect selecting other energy options that are available to a data center. For example, we show that the use of on-site storage and the deployment of geographical workload distribution can particularly help data centers in utilizing high-risk energy options such as renewable generation. The analytical approach in this thesis takes into account service-level agreements, risk management constraints, and the statistical characteristics of the Internet workload and electricity prices. Using empirical data, the performance of the proposed profit maximization models is evaluated, and the capability of data centers to benefit from participation in a variety of Demand Response programs is assessed.
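
    The quality-of-service versus power trade-off can be illustrated with a toy M/M/1 profit model (a drastic simplification of the thesis's queuing models; the arrival rate, prices, SLA bound, and power coefficients are all assumed):

      import numpy as np

      lam   = 80.0      # workload arrival rate, requests/s (assumed)
      price = 1e-4      # electricity price, $ per joule (assumed)
      rev   = 0.002     # revenue per request when the SLA is met, $ (assumed)
      d_sla = 0.05      # SLA bound on mean response time, s (assumed)
      p_idle, c_dyn = 100.0, 1.0   # linear power model: P = p_idle + c_dyn * mu, W

      def profit(mu):
          """Profit per second of an M/M/1 server run at service rate mu."""
          if mu <= lam:
              return -np.inf                    # queue unstable below the load
          delay = 1.0 / (mu - lam)              # mean M/M/1 response time
          revenue = rev * lam if delay <= d_sla else 0.0
          power = p_idle + c_dyn * mu           # faster service burns more energy
          return revenue - price * power

      mus = np.linspace(81.0, 300.0, 2000)
      best = max(mus, key=profit)
      print(f"profit-maximizing service rate ~ {best:.1f} req/s")
      # The optimum sits just above lam + 1/d_sla = 100 req/s: enough capacity
      # to meet the SLA, and not a watt more.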

  17. Evaluation of an Adaptive Game that Uses EEG Measures Validated during the Design Process as Inputs to a Biocybernetic Loop.

    PubMed

    Ewing, Kate C; Fairclough, Stephen H; Gilleade, Kiel

    2016-01-01

    Biocybernetic adaptation is a form of physiological computing whereby real-time data streaming from the brain and body is used by a negative control loop to adapt the user interface. This article describes the development of an adaptive game system that is designed to maximize player engagement by utilizing changes in real-time electroencephalography (EEG) to adjust the level of game demand. The research consists of four main stages: (1) the development of a conceptual framework upon which to model the interaction between person and system; (2) the validation of the psychophysiological inference underpinning the loop; (3) the construction of a working prototype; and (4) an evaluation of the adaptive game. Two studies are reported. The first demonstrates the sensitivity of EEG power in the (frontal) theta and (parietal) alpha bands to changing levels of game demand. These variables were then reformulated within the working biocybernetic control loop designed to maximize player engagement. The second study evaluated the performance of an adaptive game of Tetris with respect to system behavior and user experience. Important issues for the design and evaluation of closed-loop interfaces are discussed.

  18. A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.

    PubMed

    Leibfried, Felix; Braun, Daniel A

    2015-08-01

    Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
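
    The reward-information trade-off can be illustrated with the standard Blahut-Arimoto-style iteration for an information-constrained decision maker; the utility matrix and trade-off parameter below are invented, and the sketch is generic rather than the paper's spiking-neuron learning rule:

      import numpy as np

      # Utility U[s, a] over 3 sensory states (rows) and 3 motor outputs (columns).
      U = np.array([[1.0, 0.2, 0.0],
                    [0.0, 1.0, 0.2],
                    [0.2, 0.0, 1.0]])
      p_s = np.full(3, 1/3)     # state distribution
      beta = 2.0                # price of information: higher beta buys more reward

      # Alternate p(a|s) ~ p(a)*exp(beta*U[s,a]) and p(a) = sum_s p(s)*p(a|s).
      p_a = np.full(3, 1/3)
      for _ in range(200):
          p_as = p_a * np.exp(beta * U)
          p_as /= p_as.sum(axis=1, keepdims=True)
          p_a = p_s @ p_as

      reward = (p_s[:, None] * p_as * U).sum()
      mi = (p_s[:, None] * p_as * np.log(p_as / p_a)).sum()
      print(f"E[reward] = {reward:.3f}, I(S;A) = {mi:.3f} nats at beta = {beta}")
      # beta -> 0 forces p(a|s) = p(a) (zero information, low reward);
      # beta -> infinity recovers the deterministic reward maximizer.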

  19. Individual versus Household Migration Decision Rules: Gender and Marital Status Differences in Intentions to Migrate in South Africa

    PubMed Central

    Gubhaju, Bina; De Jong, Gordon F.

    2009-01-01

    This research tests the thesis that the neoclassical micro-economic and new household economic theoretical assumptions about migration decision-making rules are segmented by gender, marital status, and time frame of intention to migrate. Comparative tests of both theories within the same study design are relatively rare. Utilizing data from the Causes of Migration in South Africa national migration survey, we analyze how individually held “own-future” versus alternative “household well-being” migration decision rules affect the migration intentions of male and female adults in South Africa. Results from the gender- and marital-status-specific logistic regression models show consistent support for the differentiated gender-marital status decision rule thesis. Specifically, the “maximizing one’s own future” neoclassical microeconomic proposition is most applicable to never-married men and women, the “maximizing household income” proposition to married men with short-term migration intentions, and the “reduce household risk” proposition to the longer-time-horizon migration intentions of married men and women. The results provide new evidence on the way household strategies and individual goals jointly affect intentions to move or stay. PMID:20161187

  20. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, demonstrating the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
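
    A minimal numerical counterpart of this design problem is sketched below, with a synthetic flow duration curve and assumed head, efficiency, price, and cost figures standing in for the paper's analytical expressions:

      import numpy as np

      # Synthetic flow duration curve: discharge vs. exceedance probability.
      p = np.linspace(0.001, 0.999, 999)
      Q = 8.0 * (-np.log(p)) ** 0.8        # discharge, m^3/s (assumed regime)
      mef = 1.0                            # minimum environmental flow, m^3/s

      def annual_energy_mwh(capacity, head=50.0, eff=0.85):
          """Energy from flow above the MEF, capped at plant capacity."""
          usable = np.clip(Q - mef, 0.0, capacity)             # m^3/s over the FDC
          mean_power_w = eff * 1000.0 * 9.81 * head * usable.mean()
          return mean_power_w * 8760.0 / 1e6                   # MWh per year

      price, capacity_cost = 60.0, 9000.0  # $/MWh and annualized $ per m^3/s (assumed)

      def net_return(c):
          return price * annual_energy_mwh(c) - capacity_cost * c

      caps = np.linspace(0.5, 40.0, 400)
      best = max(caps, key=net_return)
      print(f"economically optimal capacity ~ {best:.1f} m^3/s, "
            f"net return ~ {net_return(best):,.0f} $/yr")

    Energy alone is nondecreasing in capacity, so the interesting optimum is the economic one: it sits where the marginal unit of capacity is wetted too rarely to pay for itself, which is exactly the gap between energy and economic optimizations the paper discusses.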

  1. Evaluation of an Adaptive Game that Uses EEG Measures Validated during the Design Process as Inputs to a Biocybernetic Loop

    PubMed Central

    Ewing, Kate C.; Fairclough, Stephen H.; Gilleade, Kiel

    2016-01-01

    Biocybernetic adaptation is a form of physiological computing whereby real-time data streaming from the brain and body is used by a negative control loop to adapt the user interface. This article describes the development of an adaptive game system that is designed to maximize player engagement by utilizing changes in real-time electroencephalography (EEG) to adjust the level of game demand. The research consists of four main stages: (1) the development of a conceptual framework upon which to model the interaction between person and system; (2) the validation of the psychophysiological inference underpinning the loop; (3) the construction of a working prototype; and (4) an evaluation of the adaptive game. Two studies are reported. The first demonstrates the sensitivity of EEG power in the (frontal) theta and (parietal) alpha bands to changing levels of game demand. These variables were then reformulated within the working biocybernetic control loop designed to maximize player engagement. The second study evaluated the performance of an adaptive game of Tetris with respect to system behavior and user experience. Important issues for the design and evaluation of closed-loop interfaces are discussed. PMID:27242486

  2. The Dynamics of Multilateral Exchange

    NASA Astrophysics Data System (ADS)

    Hausken, Kjell; Moxnes, John F.

    The article formulates a dynamic mathematical model where arbitrarily many players produce, consume, exchange, loan, and deposit arbitrarily many goods over time to maximize utility. Consuming goods constitutes a benefit, and producing, exporting, and loaning away goods constitute a cost. Utilities are benefits minus costs, which depend on the exchange ratios and bargaining functions. Three-way exchange occurs when one player acquires, through exchange, one good from another player with the sole purpose of using this good to exchange against the desired good from a third player. Such a triple handshake is not merely a set of double handshakes since the player assigns no interest to the first good in his benefit function. Cognitive and organization costs increase dramatically for higher order exchanges. An exchange theory accounting for media of exchange follows from simple generalization of two-way exchange. The examples of r-way exchange are the triangle trade between Africa, the USA, and England in the 17th and 18th centuries, the hypothetical hypercycle involving RNAs as players and enzymes as goods, and reaction-diffusion processes. The emergence of exchange, and the role of trading agents are discussed. We simulate an example where two-way exchange gives zero production and zero utility, while three-way exchange causes considerable production and positive utility. Maximum utility for each player is reached when exchanges of the same order as the number of players in society are allowed. The article merges micro theory and macro theory within the social, natural, and physical sciences.
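
    The gap between double and triple handshakes can be made concrete with a three-player toy economy in which no pair has a double coincidence of wants; the endowments and preferences are assumed for illustration:

      # Each player holds one good but values only the next player's good.
      endowment = {"A": "g1", "B": "g2", "C": "g3"}
      values    = {"A": "g2", "B": "g3", "C": "g1"}

      def two_way_utility():
          """A pairwise trade occurs only if each side wants the other's good."""
          total = 0
          for i in "ABC":
              for j in "ABC":
                  if i < j and values[i] == endowment[j] and values[j] == endowment[i]:
                      total += 2            # both parties gain
          return total

      def three_way_utility():
          """Cyclic exchange: A receives B's good, B receives C's, C receives A's."""
          holdings = {"A": endowment["B"], "B": endowment["C"], "C": endowment["A"]}
          return sum(holdings[p] == values[p] for p in holdings)

      print("utility from two-way exchange:  ", two_way_utility())    # 0
      print("utility from three-way exchange:", three_way_utility())  # 3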

  3. Validity and reliability of the PowerTap mobile cycling powermeter when compared with the SRM Device.

    PubMed

    Bertucci, W; Duc, S; Villerius, V; Pernin, J N; Grappe, F

    2005-12-01

    The SRM power measuring crank system is nowadays a popular device for cycling power output (PO) measurements in the field and in laboratories. The PowerTap (CycleOps, Madison, USA) is a more recent and less well-known device that allows mobile PO measurements of cycling via the rear wheel hub. The aim of this study is to test the validity and reliability of the PowerTap by comparing it with the most accurate version of the SRM system (i.e. the scientific model). The validity of the PowerTap is tested during i) sub-maximal incremental intensities (ranging from 100 to 420 W) on a treadmill with different pedalling cadences (45 to 120 rpm) and cycling positions (standing and seated) on different grades, ii) a continuous sub-maximal intensity lasting 30 min, iii) a maximal intensity (8-s sprint), and iv) real road cycling. The reliability is assessed by repeating the sub-maximal incremental and continuous tests ten times. The results show a good validity of the PowerTap during sub-maximal intensities between 100 and 450 W (mean PO difference -1.2 +/- 1.3 %) when it is compared to the scientific SRM model, but less validity for the maximal PO during sprint exercise, where the validity appears to depend on the gear ratio. The reliability of the PowerTap during the sub-maximal intensities is similar to that of the scientific SRM model (the coefficient of variation is 0.9 to 2.9 % for the PowerTap and 0.7 to 2.1 % for the SRM). The PowerTap must be considered a suitable device for PO measurements during sub-maximal real road cycling and in sub-maximal laboratory tests.

  4. Oropharyngeal dysphagia: surveying practice patterns of the speech-language pathologist.

    PubMed

    Martino, Rosemary; Pron, Gaylene; Diamant, Nicholas E

    2004-01-01

    The present study was designed to obtain a comprehensive view of the dysphagia assessment practice patterns of speech-language pathologists and their opinion on the importance of these practices using survey methods and taking into consideration clinician, patient, and practice-setting variables. A self-administered mail questionnaire was developed following established methodology to maximize response rates. Eight dysphagia experts independently rated the new survey for content validity. Test-retest reliability was assessed with a random sample of 23 participants. The survey was sent to 50 speech-language pathologists randomly selected from the Canadian professional association database of members who practice in dysphagia. Surveys were mailed according to the Dillman Total Design Method and included an incentive offer. High survey (64%) and item response (95%) rates were achieved and clinicians were reliable reporters of their practice behaviors (ICC>0.60). Of all the clinical assessment items, 36% were reported with high (>80%) utilization and 24% with low (<20%) utilization, the former pertaining to tongue motion and vocal quality after food/fluid intake and the latter to testing of oral sensation without food. One-third (33%) of instrumental assessment items were highly utilized and included assessment of bolus movement and laryngeal response to bolus misdirection. Overall, clinician experience and teaching institutions influenced greater utilization. Opinions of importance were similar to utilization behaviors (r = 0.947, p = 0.01). Of all patients referred for dysphagia assessment, full clinical assessments were administered to 71% of patients but instrumental assessments to only 36%. A hierarchical model of practice behavior is proposed to explain this pattern of progressively decreasing item utilization.

  5. Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling

    PubMed Central

    Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren

    2014-01-01

    Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136

  6. Multiple ecosystem services in a working landscape

    PubMed Central

    Eastburn, Danny J.; O’Geen, Anthony T.; Tate, Kenneth W.; Roche, Leslie M.

    2017-01-01

    Policy makers and practitioners are in need of useful tools and models for assessing ecosystem service outcomes and the potential risks and opportunities of ecosystem management options. We utilize a state-and-transition model framework integrating dynamic soil and vegetation properties to examine multiple ecosystem services—specifically agricultural production, biodiversity and habitat, and soil health—across human created vegetation states in a managed oak woodland landscape in a Mediterranean climate. We found clear tradeoffs and synergies in management outcomes. Grassland states maximized agricultural productivity at a loss of soil health, biodiversity, and other ecosystem services. Synergies existed among multiple ecosystem services in savanna and woodland states with significantly larger nutrient pools, more diversity and native plant richness, and less invasive species. This integrative approach can be adapted to a diversity of working landscapes to provide useful information for science-based ecosystem service valuations, conservation decision making, and management effectiveness assessments. PMID:28301475

  7. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. In the direction of dealing with this problem in real time, the ALOPEX stochastic optimization method is used, to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results, that simulate a known real aquifer case, are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its abilities to manage freshwater pumping in real aquifer environments. PMID:27689362
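
    For reference, the ALOPEX update moves each decision variable in the direction correlated with the most recent change in the objective, plus exploratory noise. The sketch below applies that rule to an invented two-well surrogate objective with a quadratic intrusion penalty, not to the sharp interface model itself:

      import numpy as np

      rng = np.random.default_rng(1)

      def objective(q):
          """Toy surrogate: pumped freshwater minus a penalty that grows as
          heavier pumping pulls the (fictitious) saltwater front inland."""
          penalty = 0.2 * (q ** 2).sum() + 0.1 * q[0] * q[1]
          return q.sum() - penalty          # to be maximized

      q_prev = rng.uniform(1.0, 3.0, size=2)            # two wells' pumping rates
      q = q_prev + rng.uniform(-0.1, 0.1, size=2)
      f_prev, f = objective(q_prev), objective(q)
      gamma, sigma = 0.05, 0.02

      for _ in range(2000):
          # ALOPEX: step each rate along the direction correlated with improvement.
          step = gamma * (q - q_prev) * (f - f_prev) + rng.normal(0.0, sigma, 2)
          q_prev, f_prev = q, f
          q = np.clip(q + step, 0.0, None)              # rates stay non-negative
          f = objective(q)

      print(f"pumping rates ~ {np.round(q, 2)}; the walk should hover near the "
            f"analytic optimum q* = [2, 2]")

    Because the update needs only objective evaluations, the same loop works when the objective comes from the analytical sharp-interface solution instead of this toy surrogate.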

  8. Triangulating the neural, psychological, and economic bases of guilt aversion

    PubMed Central

    Chang, Luke J.; Smith, Alec; Dufwenberg, Martin; Sanfey, Alan G.

    2011-01-01

    Why do people often choose to cooperate when they can better serve their interests by acting selfishly? One potential mechanism is that the anticipation of guilt can motivate cooperative behavior. We utilize a formal model of this process in conjunction with fMRI to identify brain regions that mediate cooperative behavior while participants decided whether or not to honor a partner’s trust. We observed increased activation in the insula, supplementary motor area, dorsolateral prefrontal cortex (PFC), and temporal parietal junction when participants were behaving consistent with our model, and found increased activity in the ventromedial PFC, dorsomedial PFC, and nucleus accumbens when they chose to abuse trust and maximize their financial reward. This study demonstrates that a neural system previously implicated in expectation processing plays a critical role in assessing moral sentiments that in turn can sustain human cooperation in the face of temptation. PMID:21555080

  9. Optimal model of PDIG based microgrid and design of complementary stabilizer using ICA.

    PubMed

    Amini, R Mohammad; Safari, A; Ravadanegh, S Najafi

    2016-09-01

    The generalized Heffron-Phillips model (GHPM) for a microgrid containing a photovoltaic (PV)-diesel machine (DM)-induction motor (IM)-governor (GV) (PDIG) system has been developed at the low voltage level. The GHPM is calculated by linearization about a loading condition. An effective Maximum Power Point Tracking (MPPT) approach for the PV network, using sliding mode control (SMC), has been implemented to maximize output power. Additionally, to improve the stability of the microgrid under greater penetration of renewable energy resources with nonlinear load, a complementary stabilizer has been presented. The imperialist competitive algorithm (ICA) is utilized to design the gains of the complementary stabilizer with a multiobjective function. The stability analysis of the PDIG system has been completed with eigenvalue analysis and nonlinear simulations. The robustness and validity of the proposed controllers in damping electromechanical modes are examined through time-domain simulation under input mechanical torque disturbances. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Multiple ecosystem services in a working landscape.

    PubMed

    Eastburn, Danny J; O'Geen, Anthony T; Tate, Kenneth W; Roche, Leslie M

    2017-01-01

    Policy makers and practitioners are in need of useful tools and models for assessing ecosystem service outcomes and the potential risks and opportunities of ecosystem management options. We utilize a state-and-transition model framework integrating dynamic soil and vegetation properties to examine multiple ecosystem services-specifically agricultural production, biodiversity and habitat, and soil health-across human created vegetation states in a managed oak woodland landscape in a Mediterranean climate. We found clear tradeoffs and synergies in management outcomes. Grassland states maximized agricultural productivity at a loss of soil health, biodiversity, and other ecosystem services. Synergies existed among multiple ecosystem services in savanna and woodland states with significantly larger nutrient pools, more diversity and native plant richness, and less invasive species. This integrative approach can be adapted to a diversity of working landscapes to provide useful information for science-based ecosystem service valuations, conservation decision making, and management effectiveness assessments.

  11. DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk

    The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only to save energy but also because of the ecologic commitment inherent in power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes, and numerically evaluates DEEP-SaM, a policy for clearing provisioning markets based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models and the implications of the fixed costs inherent in the energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for long-term sustainable management of modern datacenters.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pichara, Karim; Protopapas, Pavlos

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
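
    As a stand-in for the Bayesian-network learner, the sketch below shows the same expectation-maximization idea on a much simpler model: a two-feature Gaussian fitted to a synthetic catalog in which roughly 30% of entries are missing:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.multivariate_normal([1.0, -1.0], [[1.0, 0.6], [0.6, 1.0]], size=500)
      X[rng.random(X.shape) < 0.3] = np.nan             # knock out ~30% of entries

      mu, cov = np.nanmean(X, axis=0), np.eye(2)
      for _ in range(50):
          # E-step: fill each row's missing block with its conditional expectation
          # given the observed block, tracking the leftover conditional covariance.
          Xf, extra = X.copy(), np.zeros((2, 2))
          for i, row in enumerate(X):
              m = np.isnan(row)
              if not m.any():
                  continue
              o = ~m
              if not o.any():                           # row entirely missing
                  Xf[i], extra = mu, extra + cov
                  continue
              reg = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
              Xf[i, m] = mu[m] + reg @ (row[o] - mu[o])
              extra[np.ix_(m, m)] += cov[np.ix_(m, m)] - reg @ cov[np.ix_(o, m)]
          # M-step: re-estimate the parameters from the completed data.
          mu = Xf.mean(axis=0)
          d = Xf - mu
          cov = (d.T @ d + extra) / len(X)

      print("EM mean:", mu.round(2), "\nEM covariance:\n", cov.round(2))

    The estimates recover the generating mean and covariance despite the missing entries; the paper applies the same E/M alternation to a Bayesian network rather than a single Gaussian.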

  13. Rejuvenation of Spent Media via Supported Emulsion Liquid Membranes

    NASA Technical Reports Server (NTRS)

    Wiencek, John M.

    2002-01-01

    The overall goal of this project was to maximize the reusability of spent fermentation media. Supported emulsion liquid membrane separation, a highly efficient extraction technique, was used to remove inhibitory byproducts during fermentation, thus improving the yield while reducing the need for fresh water. The key objectives of this study were: (1) Develop an emulsion liquid membrane system targeting low molecular weight organic acids which has minimal toxicity on a variety of microbial systems. (2) Conduct mass transfer studies to allow proper modeling and design of a supported emulsion liquid membrane system. (3) Investigate the effect of gravity on emulsion coalescence within the membrane unit. (4) Assess the effect of water re-use on fermentation yields in a model microbial system. And (5) Develop a perfusion-type fermentor utilizing a supported emulsion liquid membrane system to control inhibitory fermentation byproducts (not completed due to lack of funds).

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, V; Nguyen, D; Tran, A

    Purpose: To develop and clinically implement 4π radiotherapy, an inverse optimization platform that maximally utilizes non-coplanar intensity modulated radiotherapy (IMRT) beams to significantly improve critical organ sparing. Methods: A 3D scanner was used to digitize the human and phantom subject surfaces, which were positioned in the computer assisted design (CAD) model of a TrueBeam machine to create a virtual geometrical model, based on which the feasible beam space was calculated for different tumor locations. Beamlets were computed for all feasible beams using convolution/superposition. A column generation algorithm was employed to optimize patient specific beam orientations and fluence maps. Optimal routing through all selected beams was calculated by a level set method. The resultant plans were converted to XML files and delivered to phantoms in the TrueBeam developer mode. Finally, 4π plans were recomputed in Eclipse and manually delivered to recurrent GBM patients. Results: Compared to IMRT utilizing manually selected beams and volumetric modulated arc therapy plans, markedly improved dosimetry was observed using 4π for the brain, head and neck, liver, lung, and prostate patients. The improvements were due to significantly improved conformality and reduced high dose spillage to organs mediolateral to the PTV. The virtual geometrical model was experimentally validated. Safety margins with 99.9% confidence in collision avoidance were included in the model, based on model accuracy estimates determined via 300 physical machine-to-phantom distance measurements. Automated delivery in the developer mode was completed in 10 minutes and collision free. Manual 4π treatment of the GBM cases resulted in significant brainstem sparing and took 35–45 minutes including multiple images, which showed submillimeter cranial intrafractional motion. Conclusion: The mathematical modeling utilized in 4π is sufficiently accurate to create and guide highly complex non-coplanar IMRT treatments that consistently and significantly outperform human-operator-created plans. Deliverability of such plans is clinically demonstrated. This work is funded by Varian Medical Systems and the NSF Graduate Research Fellowship DGE-1144087.

  15. A study of the 200-metre fast walk test as a possible new assessment tool to predict maximal heart rate and define target heart rate for exercise training of coronary heart disease patients.

    PubMed

    Casillas, Jean-Marie; Joussain, Charles; Gremeaux, Vincent; Hannequin, Armelle; Rapin, Amandine; Laurent, Yves; Benaïm, Charles

    2015-02-01

    To develop a new predictive model of maximal heart rate based on two walking tests at different speeds (comfortable and brisk walking) as an alternative to a cardiopulmonary exercise test during cardiac rehabilitation. Evaluation of a clinical assessment tool. A Cardiac Rehabilitation Department in France. A total of 148 patients (133 men), mean age 59 ± 9 years, at the end of an outpatient cardiac rehabilitation programme. Patients successively performed a 6-minute walk test, a 200 m fast-walk test (200mFWT), and a cardiopulmonary exercise test, with measurement of heart rate at the end of each test. An all-possible regression procedure was used to determine the best predictive regression models of maximal heart rate. The best model was compared with the Fox equation in terms of predictive error of maximal heart rate using the paired t-test. Results of the two walking tests correlated significantly with maximal heart rate determined during the cardiopulmonary exercise test, whereas anthropometric parameters and resting heart rate did not. The simplified predictive model with the most acceptable mean error was: maximal heart rate = 130 - 0.6 × age + 0.3 × HR200mFWT (R^2 = 0.24). This model was superior to the Fox formula (R^2 = 0.138). The relationship between training target heart rate calculated from measured reserve heart rate and that established using this predictive model was statistically significant (r = 0.528, p < 10^-6). A formula combining heart rate measured during a safe, simple fast walk test and age is more efficient than an equation including only age to predict maximal heart rate and training target heart rate. © The Author(s) 2014.
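
    The reported model is simple enough to apply directly. The sketch below evaluates it for an illustrative patient; the 60% heart-rate-reserve (Karvonen-style) target step is an assumption, since the abstract does not spell out the exact training-zone formula:

      def predicted_max_hr(age, hr_200m_fwt):
          """Simplified model from the study: HRmax = 130 - 0.6*age + 0.3*HR200mFWT."""
          return 130 - 0.6 * age + 0.3 * hr_200m_fwt

      def target_hr(hr_rest, hr_max, intensity=0.6):
          """Karvonen-style target: rest + intensity * heart-rate reserve (assumed)."""
          return hr_rest + intensity * (hr_max - hr_rest)

      hr_max = predicted_max_hr(age=59, hr_200m_fwt=110)    # illustrative inputs
      print(f"predicted HRmax ~ {hr_max:.0f} bpm")           # ~128 bpm
      print(f"60% training target ~ {target_hr(65, hr_max):.0f} bpm")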

  16. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    USDA-ARS?s Scientific Manuscript database

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...

  17. Biomass for biorefining: Resources, allocation, utilization, and policies

    USDA-ARS?s Scientific Manuscript database

    The importance of biomass in the development of renewable energy, the availability and allocation of biomass, its preparation for use in biorefineries, and the policies affecting biomass are discussed in this chapter. Bioenergy development will depend on maximizing the amount of biomass obtained fro...

  18. The Child and Adolescent Psychiatry Trials Network

    ERIC Educational Resources Information Center

    March, John S.; Silva, Susan G.; Compton, Scott; Anthony, Ginger; DeVeaugh-Geiss, Joseph; Califf, Robert; Krishnan, Ranga

    2004-01-01

    Objective: The current generation of clinical trials in pediatric psychiatry often fails to maximize clinical utility for practicing clinicians, thereby diluting its impact. Method: To attain maximum clinical relevance and acceptability, the Child and Adolescent Psychiatry Trials Network (CAPTN) will transport to pediatric psychiatry the practical…

  19. Medical Problem-Solving: A Critique of the Literature.

    ERIC Educational Resources Information Center

    McGuire, Christine H.

    1985-01-01

    Prescriptive, decision-analysis of medical problem-solving has been based on decision theory that involves calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)

  20. Why is Improving Water Quality in the Gulf of Mexico so Critical?

    EPA Pesticide Factsheets

    The EPA regional offices and the Gulf of Mexico Program work with Gulf States to continue to maximize the efficiency and utility of water quality monitoring efforts for local managers by coordinating and standardizing state and federal water quality data

  1. Maximizing internal opportunities for healthcare facilities facing a managed-care environment.

    PubMed

    Gillespie, M

    1997-01-01

    The primary theme of this article concerns the pressures on healthcare facilities to become efficient utilizers of their existing resources. This acute need for efficiency has been extremely obvious since the changing reimbursement patterns of managed care have proliferated across the nation.

  2. Utilizing Partnerships to Maximize Resources in College Counseling Services

    ERIC Educational Resources Information Center

    Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas

    2015-01-01

    Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…

  3. Designing advanced biochar products for maximizing greenhouse gas mitigation potential

    USDA-ARS?s Scientific Manuscript database

    Greenhouse gas (GHG) emissions from agricultural operations continue to increase. Carbon enriched char materials like biochar have been described as a mitigation strategy. Utilization of biochar material as a soil amendment has been demonstrated to provide potentially further soil GHG suppression du...

  4. Paracrine communication maximizes cellular response fidelity in wound signaling

    PubMed Central

    Handly, L Naomi; Pilko, Anna; Wollman, Roy

    2015-01-01

    Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet, variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly we predict the existence of a maximum gradient signal to noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485
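
    The tradeoff can be caricatured with a small bias-variance model: averaging over more neighbours shrinks measurement noise but blurs the positional differences the gradient encodes. The numbers below are invented and the model is a stylization, not the paper's analysis:

      import numpy as np

      sigma   = 0.5    # single-cell measurement noise (assumed)
      slope   = 1.0    # local gradient steepness |dc/dx| (assumed)
      density = 20.0   # cells per unit length joining the paracrine average

      def positional_error(radius):
          """Variance of a cell's position estimate when it averages over
          n ~ density * radius neighbours spread across `radius`."""
          n = max(1.0, density * radius)
          noise = (sigma ** 2 / n) / slope ** 2    # shrinks as more cells average
          blur = radius ** 2 / 12.0                # spread of neighbour positions
          return noise + blur

      radii = np.linspace(0.01, 2.0, 500)
      best = radii[int(np.argmin([positional_error(r) for r in radii]))]
      print(f"error-minimizing communication radius ~ {best:.2f}: an interior "
            f"optimum, echoing the finding that cells use intermediate levels "
            f"of paracrine signaling")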

  5. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
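
    As a usage note, the knowledge base's REST interface can be queried directly. The sketch below assumes the publicly documented v2 endpoint paths and response fields, which may differ or change; inspect the returned JSON if a key is missing:

      import requests

      BASE = "http://bigg.ucsd.edu/api/v2"   # endpoint root per the public BiGG docs

      # List the available genome-scale models.
      models = requests.get(f"{BASE}/models", timeout=30).json()
      print(models.get("results_count"), "models available")

      # Fetch one curated model and inspect its size (field names assumed).
      core = requests.get(f"{BASE}/models/e_coli_core", timeout=30).json()
      print("e_coli_core:", core.get("reaction_count"), "reactions,",
            core.get("metabolite_count"), "metabolites")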

  6. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
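
    The abstract above mentions that BiGG Models exposes an application programming interface. As a minimal sketch of what a client might look like, the snippet below queries the model catalog and one model's metadata over HTTP; the endpoint paths and response fields are assumptions based on the site's documented v2 API, not guaranteed by this abstract.

```python
# Minimal sketch of a BiGG Models API client; endpoint paths and response
# structure are assumptions, not taken from the abstract above.
import requests

BASE = "http://bigg.ucsd.edu/api/v2"

def list_models():
    """Return the catalog of genome-scale models hosted by BiGG."""
    resp = requests.get(f"{BASE}/models", timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]

def get_model(model_id):
    """Fetch summary metadata (gene/reaction/metabolite counts) for one model."""
    resp = requests.get(f"{BASE}/models/{model_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    models = list_models()
    print(f"{len(models)} models available")
    print(get_model("e_coli_core"))  # a commonly used BiGG model id
```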

  7. Recovery Act: Brea California Combined Cycle Electric Generating Plant Fueled by Waste Landfill Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galowitz, Stephen

    The primary objective of the Project was to maximize the productive use of the substantial quantities of waste landfill gas generated and collected at the Olinda Landfill near Brea, California. An extensive analysis determined that utilization of the waste gas for power generation in a combustion turbine combined cycle facility was the highest and best use. The resulting Project reflected a cost-effective balance of the following specific sub-objectives:
    • Meeting the environmental and regulatory requirements, particularly the compliance obligations imposed on the landfill to collect, process and destroy landfill gas
    • Utilizing proven and reliable technology and equipment
    • Maximizing electrical efficiency
    • Maximizing electric generating capacity, consistent with the anticipated quantities of landfill gas generated and collected at the Olinda Landfill
    • Maximizing equipment uptime
    • Minimizing water consumption
    • Minimizing post-combustion emissions
    The Project produced and will produce a myriad of beneficial impacts:
    • The Project created 360 FTE construction and manufacturing jobs and 15 FTE permanent jobs associated with the operation and maintenance of the plant and equipment.
    • By combining state-of-the-art gas clean-up systems with post-combustion emissions control systems, the Project established new national standards for best available control technology (BACT).
    • The Project will annually produce 280,320 MWh of clean energy.
    • By destroying the methane in the landfill gas, the Project will generate CO2-equivalent reductions of 164,938 tons annually.
    The completed facility produces 27.4 MW net and operates 24 hours a day, seven days a week.

  8. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
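
    Of the calibration methods compared above, Platt scaling is the simplest to sketch: fit a one-dimensional logistic regression that maps raw model scores to probabilities on held-out validation data. The snippet below is an illustrative sketch with hypothetical variable names, not the study's implementation.

```python
# Illustrative Platt scaling: learn sigma(a*s + b) from validation scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(val_scores, val_labels):
    """Return a function mapping raw scores to calibrated probabilities."""
    lr = LogisticRegression()
    lr.fit(np.asarray(val_scores).reshape(-1, 1), val_labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

# Usage (hypothetical data):
# calibrate = platt_scale(validation_scores, validation_outcomes)
# calibrated_risk = calibrate(test_scores)
```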

  9. Averting Behavior Framework for Perceived Risk of Yersinia enterocolitica Infections.

    PubMed

    Aziz, Sonia N; Aziz, Khwaja M S

    2012-01-01

    The focus of this research is to present a theoretical model of averting actions that households take to avoid exposure to Yersinia enterocolitica in contaminated food. The cost of illness approach only takes into account the value of a cure, while the averting behavior approach can estimate the value of preventing the illness. The household, rather than the individual, is the unit of analysis in this model, where one household member is primarily responsible for procuring uncontaminated food for their family. Since children are particularly susceptible and live with parents who are primary decision makers for sustenance, the designated household head makes the choices that are investigated in this paper. This model uses constrained optimization to characterize activities that may offer protection from exposure to Yersinia enterocolitica contaminated food. A representative household decision maker is assumed to allocate family resources to maximize utility of an altruistic parent, an assumption used in most research involving economics of the family.
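
    To make the constrained-optimization framing concrete, the toy sketch below has a household head split a fixed budget between ordinary consumption and averting actions that reduce the probability of foodborne illness, maximizing expected utility. All functional forms and parameter values are invented for illustration and are not the paper's calibrated model.

```python
# Toy averting-behavior model: allocate a budget between consumption and
# averting expenditure to maximize expected utility. Parameters are invented.
import numpy as np
from scipy.optimize import minimize

BUDGET = 100.0
P0 = 0.10            # baseline illness probability with no averting effort
ILLNESS_LOSS = 50.0  # utility loss if a household member falls ill

def neg_expected_utility(x):
    consumption, averting = x
    p_ill = P0 * np.exp(-0.1 * averting)   # risk falls with averting spend
    u = np.log(1.0 + consumption)          # diminishing returns to consumption
    return -(u - p_ill * ILLNESS_LOSS)     # negate for the minimizer

res = minimize(
    neg_expected_utility,
    x0=[50.0, 50.0],
    bounds=[(0.0, BUDGET), (0.0, BUDGET)],
    constraints=[{"type": "eq", "fun": lambda x: BUDGET - x[0] - x[1]}],
)
print("optimal (consumption, averting spend):", res.x.round(2))
```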

  10. An evolutionary perspective on gradual formation of superego in the primal horde

    PubMed Central

    Pulcu, Erdem

    2014-01-01

    Freud proposed that the processes which occurred in the primal horde are essential for understanding superego formation and therefore, the successful dissolution of the Oedipus complex. However, Freud theorized superego formation in the primal horde as if it is an instant, all-or-none achievement. The present paper proposes an alternative model aiming to explain gradual development of superego in the primitive man. The proposed model is built on knowledge from evolutionary and neural sciences as well as anthropology, and it particularly focuses on the evolutionary significance of the acquisition of fire by hominids in the Pleistocene period in the light of up-to-date archaeological findings. Acquisition of fire is discussed as a form of sublimation which might have helped Prehistoric man to maximize the utility of limited evolutionary biological resources, potentially contributing to the rate and extent of bodily evolution. The limitations of both Freud's original conceptualization and the present model are discussed accordingly in an interdisciplinary framework. PMID:24478740

  11. Pharmacodynamics of Isavuconazole in a Dynamic In Vitro Model of Invasive Pulmonary Aspergillosis

    PubMed Central

    Box, Helen; Livermore, Joanne; Johnson, Adam; McEntee, Laura; Felton, Timothy W.; Whalley, Sarah; Goodwin, Joanne

    2015-01-01

    Isavuconazonium sulfate is a novel triazole prodrug that has been recently approved for the treatment of invasive aspergillosis by the FDA. The active moiety (isavuconazole) has a broad spectrum of activity against many pathogenic fungi. This study utilized a dynamic in vitro model of the human alveolus to describe the pharmacodynamics of isavuconazole against two wild-type and two previously defined azole-resistant isolates of Aspergillus fumigatus. A human-like concentration-time profile for isavuconazole was generated. MICs were determined using CLSI and EUCAST methodologies. Galactomannan was used as a measure of fungal burden. Target values for the area under the concentration-time curve (AUC)/MIC were calculated using a population pharmacokinetics-pharmacodynamics (PK-PD) mathematical model. Isolates with higher MICs required higher AUCs in order to achieve maximal suppression of galactomannan. The AUC/MIC targets necessary to achieve 90% probability of galactomannan suppression of <1 were 11.40 and 11.20 for EUCAST and CLSI, respectively. PMID:26503648

  12. Proactive action preparation: seeing action preparation as a continuous and proactive process.

    PubMed

    Pezzulo, Giovanni; Ognibene, Dimitri

    2012-07-01

    In this paper, we aim to elucidate the processes that occur during action preparation from both a conceptual and a computational point of view. We first introduce the traditional, serial model of goal-directed action and discuss from a computational viewpoint its subprocesses occurring during the two phases of covert action preparation and overt motor control. Then, we discuss recent evidence indicating that these subprocesses are highly intertwined at representational and neural levels, which undermines the validity of the serial model and points instead to a parallel model of action specification and selection. Within the parallel view, we analyze the case of delayed choice, arguing that action preparation can be proactive, and preparatory processes can take place even before decisions are made. Specifically, we discuss how prior knowledge and prospective abilities can be used to maximize utility even before deciding what to do. To support our view, we present a computational implementation of (an approximated version of) proactive action preparation, showing its advantages in a simulated tennis-like scenario.

  13. Optimal scheduling of micro grids based on single objective programming

    NASA Astrophysics Data System (ADS)

    Chen, Yue

    2018-04-01

    Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule a micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. This paper considers the role of the battery and allows power exchange between the micro-grid and the main grid of up to 150 kW. With the goal of serving the load at minimum electricity cost (including the cost of wind curtailment), an optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of battery participation in regulation are analyzed.
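
    As a concrete illustration of the genetic-algorithm approach, the sketch below evolves a 24-hour grid-import schedule, capped at 150 kW, that covers a residual load at minimum electricity cost. Prices, loads, and GA settings are invented; the paper's full model additionally includes battery dynamics and wind curtailment.

```python
# Toy GA for micro-grid scheduling: minimize electricity cost subject to the
# 150 kW exchange cap. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
HOURS = 24
price = 0.5 + 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, HOURS))  # $/kWh
residual_load = rng.uniform(50.0, 140.0, HOURS)   # kW after renewables

def cost(schedule):
    shortfall = np.maximum(residual_load - schedule, 0.0)
    return np.sum(price * schedule) + 10.0 * np.sum(shortfall)  # penalize unmet load

def evolve(pop_size=60, generations=200):
    pop = rng.uniform(0.0, 150.0, size=(pop_size, HOURS))   # respect 150 kW cap
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0.0, 5.0, kids.shape)       # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), 0.0, 150.0)
    return min(pop, key=cost)

best = evolve()
print("best schedule cost:", round(cost(best), 2))
```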

  14. Negative correlation learning for customer churn prediction: a comparison study.

    PubMed

    Rodan, Ali; Fayyoumi, Ayham; Faris, Hossam; Alsakran, Jamal; Al-Kadi, Omar

    2015-01-01

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known among service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and maximize profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble achieves better generalization performance (a higher churn detection rate) than an MLP ensemble trained without NCL (a flat ensemble) and other common data mining techniques used for churn analysis.
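
    The distinguishing ingredient above is the negative correlation penalty. A compact sketch of the NCL loss for an ensemble is given below: each member minimizes its own squared error plus a term that decorrelates it from the ensemble mean. Shapes and the penalty weight are assumptions for illustration.

```python
# Negative correlation learning (NCL) loss sketch: per-member MSE plus a
# correlation penalty p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar).
import numpy as np

def ncl_losses(member_outputs, targets, lam=0.5):
    """member_outputs: (n_members, n_samples); targets: (n_samples,)."""
    f_bar = member_outputs.mean(axis=0)   # ensemble output
    n = len(member_outputs)
    losses = []
    for f_i in member_outputs:
        others = member_outputs.sum(axis=0) - f_i - (n - 1) * f_bar
        mse = np.mean((f_i - targets) ** 2)
        penalty = np.mean((f_i - f_bar) * others)  # equals -mean((f_i - f_bar)**2)
        losses.append(mse + lam * penalty)
    return losses
```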

  15. Adaptive Broadcasting Mechanism for Bandwidth Allocation in Mobile Services

    PubMed Central

    Horng, Gwo-Jiun; Wang, Chi-Hsuan; Chou, Chih-Lun

    2014-01-01

    This paper proposes a tree-based adaptive broadcasting (TAB) algorithm for data dissemination to improve data access efficiency. The proposed TAB algorithm first constructs a broadcast tree to determine the broadcast frequency of each data item, then splits the broadcast tree into broadcast subtrees ("broadcast wood") to generate the broadcast program. In addition, this paper develops an analytical model to derive the mean access latency of the generated broadcast program. In light of the derived results, both the index channel's bandwidth and the data channel's bandwidth can be optimally allocated to maximize bandwidth utilization. This paper presents experiments to help evaluate the effectiveness of the proposed strategy. From the experimental results, it can be seen that the proposed mechanism is feasible in practice. PMID:25057509
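
    The TAB tree construction itself is not reproduced in this listing, but a classic baseline for the same frequency-assignment problem is easy to sketch: in broadcast scheduling, mean access latency is minimized when each item's broadcast frequency is proportional to the square root of its access probability divided by its size. The snippet below shows that allocation rule, not the paper's algorithm.

```python
# Square-root broadcast-frequency rule (a classic heuristic, not TAB itself).
import numpy as np

def broadcast_shares(access_probs, sizes):
    """Fraction of the broadcast cycle to devote to each data item."""
    w = np.sqrt(np.asarray(access_probs, float) / np.asarray(sizes, float))
    return w / w.sum()

print(broadcast_shares([0.5, 0.3, 0.2], [1, 1, 2]).round(3))
```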

  16. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

    The net reclassification improvement (NRI) is an attractively simple summary measure quantifying improvement in performance because of addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
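
    A small sketch of the quantity discussed above: classify each patient as high risk when predicted risk exceeds the observed event rate, and compare the resulting (sensitivity minus false positive rate) between the new and old models. The abstract notes that, for a well-calibrated model, this per-model quantity coincides with the Kolmogorov-Smirnov distance between risk distributions among events and non-events. Data and names below are hypothetical.

```python
# NRI at event rate, sketched as a difference of (TPR - FPR) at the
# event-rate threshold; inputs are hypothetical numpy arrays.
import numpy as np

def tpr_minus_fpr(risk, events, threshold):
    pos = risk >= threshold
    return pos[events == 1].mean() - pos[events == 0].mean()

def nri_at_event_rate(risk_new, risk_old, events):
    rate = events.mean()   # classification threshold set at the event rate
    return (tpr_minus_fpr(risk_new, events, rate)
            - tpr_minus_fpr(risk_old, events, rate))
```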

  17. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.

    PubMed

    Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence, at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming.

  18. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift

    PubMed Central

    Fleming, Theresa M.; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M.; Aschieri, Filippo; Bavin, Lynda M.; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence, at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094

  19. Identifying Epigenetic Biomarkers using Maximal Relevance and Minimal Redundancy Based Feature Selection for Multi-Omics Data.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal

    2017-01-01

    Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance and minimal-redundancy criterion based feature (gene) selection for multi-omics datasets. First, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximally relevant, but variable-weighted minimally redundant, genes as top-ranked genes. For statistical validation, we apply a t-test on both the expression and methylation data consisting of only the normally distributed top-ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform a non-parametric empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top-ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene-markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods based on classification performances obtained using several well-known classifiers.
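
    The greedy maximal-relevance, minimal-redundancy criterion underlying the framework can be sketched compactly: repeatedly pick the gene whose mutual information with the class label, minus its average mutual information with already-selected genes, is largest. The paper's variable-weighting scheme is omitted; discretization and estimator choices below are assumptions.

```python
# Plain mRMR sketch (unweighted); X_binned is a discretized
# (n_samples, n_genes) expression/methylation matrix, y the class labels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr(X_binned, y, k=20):
    relevance = mutual_info_classif(X_binned, y, discrete_features=True)
    selected = [int(np.argmax(relevance))]  # start from the most relevant gene
    while len(selected) < k:
        def score(g):
            redundancy = np.mean([mutual_info_score(X_binned[:, g], X_binned[:, s])
                                  for s in selected])
            return relevance[g] - redundancy
        candidates = [g for g in range(X_binned.shape[1]) if g not in selected]
        selected.append(max(candidates, key=score))
    return selected
```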

  20. Understanding the factors that affect maximal fat oxidation.

    PubMed

    Purdom, Troy; Kravitz, Len; Dokladny, Karol; Mermier, Christine

    2018-01-01

    Lipids as a fuel source for energy supply during submaximal exercise originate from subcutaneous adipose tissue derived fatty acids (FA), intramuscular triacylglycerides (IMTG), cholesterol, and dietary fat. These sources of fat contribute to fatty acid oxidation (FAox) in various ways. Maximal utilization of FAs occurs primarily at exercise intensities between 45 and 65% VO2max; this is known as maximal fat oxidation (MFO) and is measured in g/min. Fatty acid oxidation occurs during submaximal exercise intensities and is complementary to carbohydrate oxidation (CHOox). Due to limitations in FA transport across the cell and mitochondrial membranes, FAox is limited at higher exercise intensities. The point at which FAox reaches its maximum and begins to decline is referred to as the crossover point. Exercise intensities that exceed the crossover point (~65% VO2max) utilize CHO as the predominant fuel source for energy supply. Training status, exercise intensity, exercise duration, sex differences, and nutrition have all been shown to affect the cellular expression responsible for the FAox rate. Each stimulus affects the process of FAox differently, resulting in specific adaptations that influence endurance exercise performance. Endurance training, specifically of long duration (>2 h), facilitates adaptations that alter both the origin of FAs and the FAox rate. Additionally, the influence of sex and nutrition on FAox is discussed. Finally, the role of FAox in the improvement of performance during endurance training is discussed.

  1. Interprofessional education and distance education: A review and appraisal of the current literature.

    PubMed

    McCutcheon, Livia R M; Alzghari, Saeed K; Lee, Young R; Long, William G; Marquez, Robyn

    2017-07-01

    Interprofessional education (IPE) is becoming essential for students and healthcare professionals. An evolving approach to implement it is via distance education. Distance education can provide a viable solution to deliver IPE in a variety of settings. A literature search on PubMed and Academic Search Complete databases was conducted, revealing 478 articles ranging from the years of 1971-2015. The articles were screened for relevance using the following inclusion criteria: 1) Is this study implementing IPE? 2) Is this study utilizing the instructional delivery method of distance education? 3) Does this study contain students from two or more healthcare professions? Fifteen studies met the inclusion criteria and were systematically analyzed to identify data relevant for this review. Findings from this review provide a description of the teaching methods involved in distance education in promoting IPE and an assessment of the continuing use of distance education to foster IPE. Success varied depending upon the distance-based instructional model utilized to facilitate IPE. Incorporating distance education to implement IPE can be an opportunity to develop team collaboration and communication skills among students. Teaching models presented in this review have the potential to be adapted to methods that leverage the power of evolving technology. Further research is needed to understand which distance education instructional delivery models best maximize the IPE experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A novel model-based control strategy for aerobic filamentous fungal fed-batch fermentation processes.

    PubMed

    Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V

    2017-07-01

    A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration, and also the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill, while avoiding oxygen limited conditions. This requires a two stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% under the maximum fill when tested using eight pilot scale batches, and over filling was avoided. The results were reproducible, unlike the reference experiments which show over 10% variation in the final tank fill, and this also includes over filling. The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
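
    A heavily simplified rendering of the on-line control idea is sketched below: each hour, feed as fast as the oxygen transfer capacity allows, while rationing the remaining fill so the batch lands on the target tank mass at the end time. All parameters are invented for illustration and are not the pilot plant's values; the real strategy recursively re-estimates the model states from on-line measurements.

```python
# Toy fed-batch feed-rate controller under an oxygen transfer constraint.
OTR_MAX = 0.4          # oxygen transfer capacity, mol O2/(L*h)   (assumed)
O2_PER_KG_FEED = 20.0  # oxygen demand of feeding, mol O2/kg      (assumed)
MAINT_O2 = 0.004       # maintenance demand, mol O2/(g biomass*h) (assumed)
TARGET_MASS, END_HOUR = 550.0, 48   # target fill (kg) and batch end (h)

mass, biomass = 300.0, 25.0  # start fill (kg ~ L) and biomass (g/L), from model
for hour in range(END_HOUR):
    volume = mass  # broth density assumed ~1 kg/L
    spare_o2 = OTR_MAX * volume - MAINT_O2 * biomass * volume
    feed = max(spare_o2, 0.0) / O2_PER_KG_FEED                  # kg/h, O2-limited
    feed = min(feed, (TARGET_MASS - mass) / (END_HOUR - hour))  # ration the fill
    mass += feed
    biomass *= 1.02  # placeholder growth; a real controller re-estimates this
print(f"final tank mass: {mass:.0f} kg (target {TARGET_MASS:.0f})")
```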

  3. [Calculating the optimum size of a hemodialysis unit based on infrastructure potential].

    PubMed

    Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis

    2010-01-01

    To estimate the optimum size for hemodialysis units to maximize production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup dialyzer) and a purifier system able to supply them all. It also requires one nephrologist and five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production due to not fully utilizing equipment and personnel, particularly their water purifier potential, which happens to be the most expensive asset for these units.

  4. On Reverse Stackelberg Game and Optimal Mean Field Control for a Large Population of Thermostatically Controlled Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, in the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. In the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function that depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.

  5. A research on service quality decision-making of Chinese communications industry based on quantum game

    NASA Astrophysics Data System (ADS)

    Zhang, Cuihua; Xing, Peng

    2015-08-01

    In recent years, the Chinese service industry has been developing rapidly. Compared with developed countries, service quality remains the bottleneck for the Chinese service industry. Against the background of the three major telecommunications service providers in China, functions of customer perceived utility are established. With the goal of maximizing the consumer's perceived utility, the classic Nash equilibrium solution and the quantum equilibrium solution are obtained. A numerical example is then studied, and the changing trend of service quality and customer perceived utility is further analyzed under the influence of the entanglement operator. Finally, it is shown that the quantum game solution outperforms the Nash equilibrium solution.

  6. Racial and Ethnic Differences in the Utilization of Prayer and Clergy Counseling by Infertile US Women Desiring Pregnancy.

    PubMed

    Collins, Stephen C; Kim, Soorin; Chan, Esther

    2017-11-29

    Religion can have a significant influence on the experience of infertility. However, it is unclear how many US women turn to religion when facing infertility. Here, we examine the utilization of prayer and clergy counsel among a nationally representative sample of 1062 infertile US women. Prayer was used by 74.8% of the participants, and clergy counsel was the most common formal support system utilized. Both prayer and clergy counsel were significantly more common among black and Hispanic women. Healthcare providers should acknowledge the spiritual needs of their infertile patients and ally with clergy when possible to provide maximally effective care.

  7. A metabolic core model elucidates how enhanced utilization of glucose and glutamine, with enhanced glutamine-dependent lactate production, promotes cancer cell growth: The WarburQ effect

    PubMed Central

    Damiani, Chiara; Colombo, Riccardo; Gaglio, Daniela; Mastroianni, Fabrizia; Westerhoff, Hans Victor; Vanoni, Marco; Alberghina, Lilia

    2017-01-01

    Cancer cells share several metabolic traits, including aerobic production of lactate from glucose (Warburg effect), extensive glutamine utilization and impaired mitochondrial electron flow. It is still unclear how these metabolic rearrangements, which may involve different molecular events in different cells, contribute to a selective advantage for cancer cell proliferation. To ascertain which metabolic pathways are used to convert glucose and glutamine to balanced energy and biomass production, we performed systematic constraint-based simulations of a model of human central metabolism. Sampling of the feasible flux space allowed us to obtain a large number of randomly mutated cells simulated at different glutamine and glucose uptake rates. We observed that, in the limited subset of proliferating cells, most displayed fermentation of glucose to lactate in the presence of oxygen. At high utilization rates of glutamine, oxidative utilization of glucose was decreased, while the production of lactate from glutamine was enhanced. This emergent phenotype was observed only when the available carbon exceeded the amount that could be fully oxidized by the available oxygen. Under the latter conditions, standard Flux Balance Analysis indicated that: this metabolic pattern is optimal to maximize biomass and ATP production; it requires the activity of a branched TCA cycle, in which glutamine-dependent reductive carboxylation cooperates to the production of lipids and proteins; it is sustained by a variety of redox-controlled metabolic reactions. In a K-ras transformed cell line we experimentally assessed glutamine-induced metabolic changes. We validated computational results through an extension of Flux Balance Analysis that allows prediction of metabolite variations. Taken together these findings offer new understanding of the logic of the metabolic reprogramming that underlies cancer cell growth. PMID:28957320
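
    The constraint-based simulations above rest on flux balance analysis: maximize a biomass objective subject to steady-state mass balance S v = 0 and flux bounds, which is a linear program. The miniature network below is made up to show the mechanics, including the overflow behavior the abstract describes: once an "oxygen-like" bound caps the high-yield pathway, extra carbon is routed through the low-yield (fermentative) one.

```python
# Toy flux balance analysis as a linear program (scipy.optimize.linprog).
import numpy as np
from scipy.optimize import linprog

# Fluxes: v0 uptake of x; v1 fermentation (x -> y); v2 respiration
# (x -> 2y, capped by an oxygen-like bound); v3 biomass drain on y.
S = np.array([
    [1, -1, -1,  0],   # metabolite x balance
    [0,  1,  2, -1],   # metabolite y balance
])
c = np.array([0.0, 0.0, 0.0, -1.0])               # maximize v3 => minimize -v3
bounds = [(0, 10), (0, None), (0, 3), (0, None)]  # uptake cap 10, "oxygen" cap 3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("fluxes:", res.x.round(2), "biomass:", round(-res.fun, 2))
# With carbon (10) exceeding oxidative capacity (3), the optimum routes the
# overflow through fermentation, mirroring the Warburg-like pattern above.
```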

  8. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  9. Utilizing the School Health Index to Foster University and Community Engagement

    ERIC Educational Resources Information Center

    King, Kristi McClary

    2010-01-01

    A Coordinated School Health Program maximizes a school's positive interaction among health education, physical education, health services, nutrition services, counseling/psychological/social services, healthy school environment, health promotion for staff, and family and community involvement. The purpose of this semester project is for…

  10. Releases: Is There Still a Place for Their Use by Colleges and Universities?

    ERIC Educational Resources Information Center

    Connell, Mary Ann; Savage, Frederick G.

    2003-01-01

    Analyzes the legal principles, facts, and circumstances that govern decisions of courts regarding the validity of written releases, and provides practical advice to higher education lawyers and administrators as they evaluate the utility of releases and seek to maximize their benefit. (EV)

  11. The future of transportation planning : dynamic travel behavior analyses based on stochastic decision-making styles : final report.

    DOT National Transportation Integrated Search

    2003-08-01

    Over the past half-century, the progress of travel behavior research and travel demand forecasting has been spearheaded and continuously propelled by micro-economic theories, specifically utility maximization. There is no denial that the tra...

  12. On the Teaching of Portfolio Theory.

    ERIC Educational Resources Information Center

    Biederman, Daniel K.

    1992-01-01

    Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…
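
    A worked version of such a portfolio exercise (with invented numbers, not the article's): choose the risky share a to maximize expected CRRA utility over a two-state return.

```python
# Expected-utility portfolio choice by grid search; all numbers illustrative.
import numpy as np

W0, RF, GAMMA = 1.0, 0.02, 3.0     # wealth, risk-free rate, risk aversion
returns = np.array([0.25, -0.10])  # risky asset up/down returns
probs = np.array([0.5, 0.5])

def expected_utility(a):
    wealth = W0 * (1.0 + RF + a * (returns - RF))
    return np.sum(probs * wealth ** (1.0 - GAMMA) / (1.0 - GAMMA))

grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmax([expected_utility(a) for a in grid])]
print(f"optimal risky share: {best:.3f}")
```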

  13. Innovative Conference Curriculum: Maximizing Learning and Professionalism

    ERIC Educational Resources Information Center

    Hyland, Nancy; Kranzow, Jeannine

    2012-01-01

    This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…

  14. Method of optimizing performance of Rankine cycle power plants

    DOEpatents

    Pope, William L.; Pines, Howard S.; Doyle, Padraic A.; Silvester, Lenard F.

    1982-01-01

    A method for efficiently operating a Rankine cycle power plant (10) to maximize fuel utilization efficiency or energy conversion efficiency or minimize costs by selecting a turbine (22) fluid inlet state which is substantially in the area adjacent and including the transposed critical temperature line (46).

  15. Making It Stick

    ERIC Educational Resources Information Center

    Ewers, Justin

    2009-01-01

    It seems to happen every day. A meeting is called to outline a new strategy or sales plan. Down go the lights and up goes the PowerPoint. Strange phrases appear--"unlocking shareholder value," "technology-focused innovation," "maximizing utility." Lists of numbers come and go. Bullet point by bullet point, the…

  16. Planning and managing market research: Electric utility market research monograph series: Monograph 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitelaw, R.W.

    1987-01-01

    The market research techniques available now to the electric utility industry have evolved over the last thirty years into a set of sophisticated tools that permit complex behavioral analyses that earlier had been impossible. The marketing questions facing the electric utility industry now are commensurately more complex than ever before. This document was undertaken to present the tools and techniques needed to start or improve the usefulness of market research activities within electric utilities. It describes proven planning and management techniques as well as decision criteria for structuring effective market research functions for each utility's particular needs. The monograph establishes the parameters of sound utility market research given trade-offs between highly centralized or decentralized organizations, research focus, involvement in decision making, and personnel and management skills necessary to maximize the effectiveness of the structure chosen.

  17. Spectrum sensing and resource allocation for multicarrier cognitive radio systems under interference and power constraints

    NASA Astrophysics Data System (ADS)

    Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku

    2014-12-01

    Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11-g WLAN scenario.
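
    The sensing stage above rests on energy detection. A bare-bones per-subband energy detector is sketched below: compare each band's normalized energy against a chi-square threshold set from a target false-alarm probability. The filter-bank analysis and the resource allocation stage are omitted, and all parameters are illustrative.

```python
# Per-band energy detection with a false-alarm-rate threshold.
import numpy as np
from scipy.stats import chi2

def energy_detect(samples, noise_var, n_per_band, pfa=0.01):
    """samples: complex baseband vector, reshaped into (n_bands, n_per_band)."""
    bands = samples.reshape(-1, n_per_band)
    energy = np.sum(np.abs(bands) ** 2, axis=1) / noise_var
    # Under noise only, 2*energy is chi-square with 2*n_per_band degrees of freedom.
    threshold = chi2.ppf(1.0 - pfa, df=2 * n_per_band) / 2.0
    return energy > threshold  # True where a primary user is declared present

rng = np.random.default_rng(1)
noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2.0)
print(energy_detect(noise, noise_var=1.0, n_per_band=128))  # mostly False
```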

  18. Mesenchymal stem cells protective effect in adriamycin model of nephropathy.

    PubMed

    Magnasco, Alberto; Corselli, Mirko; Bertelli, Roberta; Ibatici, Adalberto; Peresi, Monica; Gaggero, Gabriele; Cappiello, Valentina; Chiavarina, Barbara; Mattioli, Girolamo; Gusmano, Rosanna; Ravetti, Jean Louis; Frassoni, Francesco; Ghiggeri, Gian Marco

    2008-01-01

    Mesenchymal stem cells (MSCs) may be of value in regeneration of renal tissue after damage; however, lack of biological knowledge and variability of results in animal models limit their utilization. We studied the effects of MSCs on podocytes in vitro and in vivo utilizing adriamycin (ADR) as a model of renal toxicity. The in vivo experimental approach was carried out in male Sprague-Dawley rats (overall 60 animals) treated with different ADR schemes to induce acute and chronic nephrosis. MSCs were given a) concomitantly to ADR in tail vein or b) in aorta and c) in tail vein 60 days after ADR. Homing was assessed with PKH26-MSCs. MSCs rescued podocytes from apoptosis induced by ADR in vitro. The maximal effect (80% rescue) was obtained with MSCs/podocytes coculture ratio of 1:1 for 72 h. All rats treated with ADR developed nephrosis. MSCs did not modify the clinical parameters (i.e., proteinuria, serum creatinine, lipids) but protected the kidney from severe glomerulosclerosis when given concomitantly to ADR. Rats given MSCs 60 days after ADR developed the same severe renal damage. Only a few MSCs were found in renal tubule-interstitial areas 1-24 h after injection and no MSCs were detected in glomeruli. MSCs reduced apoptosis of podocytes treated with ADR in vitro. Early and repeated MSCs infusion blunted glomerular damage in chronic ADR-induced nephropathy. MSCs did not modify proteinuria and progression to renal failure, which implies lack of regenerative potential in this model.

  19. Improved design method of a rotating spool compressor using a comprehensive model and comparison to experimental results

    NASA Astrophysics Data System (ADS)

    Bradshaw, Craig R.; Kemp, Greg; Orosz, Joe; Groll, Eckhard A.

    2017-08-01

    An improvement to the design process of the rotating spool compressor is presented. This improvement utilizes a comprehensive model to explore two working fluids (R410A and R134a) and various displaced volumes across a range of geometric parameters. The geometric parameters explored consist of the eccentricity ratio, varied between 0.81 and 0.92, and the length-to-diameter ratio, varied between 0.4 and 3. The key tradeoffs are evaluated, and the results show that there is an optimum eccentricity ratio and length-to-diameter ratio, unique to a particular fluid and displaced volume, which maximizes the model-predicted performance. For R410A, the modeling tool predicts that the overall isentropic efficiency is optimized at a lower length-to-diameter ratio than for R134a. Additionally, the tool predicts that as the displaced volume increases, the overall isentropic efficiency increases and the ideal length-to-diameter ratio shifts. The results from this study were utilized to develop a basic design for a 141 kW (40 tons of refrigeration) capacity prototype spool compressor for light-commercial air-conditioning applications. Results from a prototype compressor constructed based on these efforts are presented. The volumetric efficiency predictions are found to be very accurate, while the overall isentropic efficiency predictions are shown to be slightly over-predicted.

  20. Real-time topic-aware influence maximization using preprocessing.

    PubMed

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent and that information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
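
    For context, the from-scratch baseline that preprocessing aims to avoid re-running is the standard greedy algorithm: repeatedly add the seed with the largest simulated marginal gain in spread. The sketch below uses a toy independent cascade model; the graph and propagation probability are invented.

```python
# Greedy influence maximization under a toy independent cascade model.
import random

def simulate_spread(graph, seeds, p=0.1, trials=200):
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials  # mean number of activated nodes

def greedy_seeds(graph, k):
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

toy_graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [0]}
print(greedy_seeds(toy_graph, k=2))
```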

  1. Maximizing profitability in a hospital outpatient pharmacy.

    PubMed

    Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A

    1989-07-01

    This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales are described. Programs to maximize existing revenue sources such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic and increasing HMO prescription volumes are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.

  2. Optimum Sensors Integration for Multi-Sensor Multi-Target Environment for Ballistic Missile Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Barhen, Jacob; Glover, Charles Wayne

    2012-01-01

    Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.

  3. Tripartite-to-Bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces

    NASA Astrophysics Data System (ADS)

    Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao

    2018-03-01

    We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
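
    A plausible formalization of the regularization mentioned above (the notation is an assumption, not quoted from the paper): writing R(psi) for the maximal Schmidt rank, its strict super-multiplicativity under tensor powers is exactly what makes the following limit nontrivial and what gives multi-copy transformations an advantage.

```latex
% Hedged sketch of the regularized quantity; notation assumed.
R^{\infty}(\psi) = \lim_{n \to \infty} \sqrt[n]{R\!\left(\psi^{\otimes n}\right)},
\qquad \text{with } R\!\left(\psi^{\otimes 2}\right) > R(\psi)^{2}
\text{ for the strictly super-multiplicative states.}
```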

  4. Utility franchises reconsidered

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidner, B.

    It is easier to obtain a public utility franchise than one for a fast food store because companies like Burger King value the profit share and control available with a franchise arrangement. The investor-owned utilities (IOUs) in Chicago and elsewhere get little financial or regulatory benefit, although they do have an alternative because the franchise can be taken over by the city with a one-year notice. As IOUs evolved, the annual franchise fee has been incorporated into the rate in a move that taxes ratepayers and maximizes profits. Cities that found franchising unsatisfactory are looking for ways to terminate the franchise and finance a takeover, but limited-term and indeterminate franchises may offer a better mechanism when public needs and utility aims diverge. A directory lists franchised utilities by state and comments on their legal status. (DCK)

  5. The Bordetella bronchiseptica Type III Secretion System Is Required for Persistence and Disease Severity but Not Transmission in Swine

    PubMed Central

    Brockmeier, Susan L.; Loving, Crystal L.; Register, Karen B.; Kehrli, Marcus E.; Shore, Sarah M.

    2014-01-01

    Bordetella bronchiseptica is pervasive in swine populations and plays multiple roles in respiratory disease. Most studies addressing virulence factors of B. bronchiseptica utilize isolates derived from hosts other than pigs in conjunction with rodent infection models. Based on previous in vivo mouse studies, we hypothesized that the B. bronchiseptica type III secretion system (T3SS) would be required for maximal disease severity and persistence in the swine lower respiratory tract. To examine the contribution of the T3SS to the pathogenesis of B. bronchiseptica in swine, we compared the abilities of a virulent swine isolate and an isogenic T3SS mutant to colonize, cause disease, and be transmitted from host to host. We found that the T3SS is required for maximal persistence throughout the lower swine respiratory tract and contributed significantly to the development of nasal lesions and pneumonia. However, the T3SS mutant and the wild-type parent are equally capable of transmission among swine by both direct and indirect routes, demonstrating that transmission can occur even with attenuated disease. Our data further suggest that the T3SS skews the adaptive immune response in swine by hindering the development of serum anti-Bordetella antibody levels and inducing an interleukin-10 (IL-10) cell-mediated response, likely contributing to the persistence of B. bronchiseptica in the respiratory tract. Overall, our results demonstrate that the Bordetella T3SS is required for maximal persistence and disease severity in pigs, but not for transmission. PMID:24366249

  6. Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems

    PubMed Central

    Li, Feilong; Li, Zhiqiang; Li, Guangxia; Dong, Feihong; Zhang, Wei

    2017-01-01

    The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential towards spectral efficiency for providing more spectrum access opportunities to secondary users (SUs) with sufficient protection to licensed primary users (PUs). Hence, recent scientific literature has been focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, those narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we mainly investigate the problem of jointly designing the sensing time and hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework in which each SU's reporting duration can be utilized for the following SU's sensing. The sensing performance benefits from the novel CSS framework because the equivalent sensing time is extended by making full use of the reporting slots. Furthermore, for time-varying channels, spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gains simultaneously under a hard decision fusion rule. Computer simulations show that the joint optimization of sensing time, hard fusion rule and scheduling strategy achieves significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme achieves much higher spectral efficiency than the general CSS framework. PMID:28117712

  7. Grassland biodiversity can pay.

    PubMed

    Binder, Seth; Isbell, Forest; Polasky, Stephen; Catford, Jane A; Tilman, David

    2018-04-10

    The biodiversity-ecosystem functioning (BEF) literature provides strong evidence of the biophysical basis for the potential profitability of greater diversity but does not address questions of optimal management. BEF studies typically focus on the ecosystem outputs produced by randomly assembled communities that only differ in their biodiversity levels, measured by indices such as species richness. Landholders, however, do not randomly select species to plant; they choose particular species that collectively maximize profits. As such, their interest is not in comparing the average performance of randomly assembled communities at each level of biodiversity but rather comparing the best-performing communities at each diversity level. Assessing the best-performing mixture requires detailed accounting of species' identities and relative abundances. It also requires accounting for the financial cost of individual species' seeds, and the economic value of changes in the quality, quantity, and variability of the species' collective output, something that existing multifunctionality indices fail to do. This study presents an assessment approach that integrates the relevant factors into a single, coherent framework. It uses ecological production functions to inform an economic model consistent with the utility-maximizing decisions of a potentially risk-averse private landowner. We demonstrate the salience and applicability of the framework using data from an experimental grassland to estimate production relationships for hay and carbon storage. For that case, our results suggest that even a risk-neutral, profit-maximizing landowner would favor a highly diverse mix of species, with optimal species richness falling between the low levels currently found in commercial grasslands and the high levels found in natural grasslands.

  8. Reserve design to maximize species persistence

    Treesearch

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  9. Impact of air temperature on physically-based maximum precipitation estimation through change in moisture holding capacity of air

    NASA Astrophysics Data System (ADS)

    Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.

    2018-01-01

    The impact of air temperature on maximum precipitation (MP) estimation through the change in the moisture holding capacity of air was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically-based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments, in addition to applying the ABCS + RHM method, for the two severest maximized historical storm events to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise that are assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to applying the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% in the middle of the 21st century and by 27.3% at the end of the 21st century compared to the historical period.
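
    The moisture-holding-capacity mechanism invoked here can be sanity-checked with the Clausius-Clapeyron relation, under which saturation vapor pressure grows by roughly 7% per kelvin. The sketch below is only this first-order thermodynamic scaling, not the study's regional atmospheric-model experiments.

    ```python
    import numpy as np

    # First-order Clausius-Clapeyron check on the moisture-holding mechanism:
    # saturation vapor pressure grows ~7%/K, so a uniform warming dT scales the
    # column moisture (and, to first order, the thermodynamic part of an MP
    # estimate) by about exp(rate * dT) - 1. Illustrative only.
    def cc_scaling(dT, rate=0.068):
        return np.exp(rate * dT) - 1.0

    for dT in (1.0, 2.0, 4.0):
        print(f"+{dT:.0f} K -> ~{100 * cc_scaling(dT):.0f}% more column moisture")
    ```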

  10. Acceptable regret in medical decision making.

    PubMed

    Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M

    1999-09-01

    When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
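
    A minimal numerical reading of these thresholds, with assumed utilities: below, p* is the classical expected-utility treatment threshold, while the two regret-based thresholds use simple functional forms consistent with the abstract's description (the withholding threshold driven only by net benefit, the treatment threshold only by harms); the exact forms in the paper may differ.

    ```python
    # Illustrative thresholds with assumed utilities. p* is the classical
    # expected-utility treatment threshold; the two regret-based thresholds use
    # simple forms consistent with the abstract's description, not necessarily
    # the paper's exact derivation.
    B = 0.30    # net benefit of treating a diseased patient (utility), assumed
    H = 0.10    # net harm of treating a patient without disease, assumed
    R0 = 0.02   # largest regret the patient finds acceptable, assumed

    p_star = H / (B + H)        # classical expected-utility threshold
    p_withhold = R0 / B         # below this, withholding is regret-acceptable
    p_treat = 1.0 - R0 / H      # above this, treating is regret-acceptable

    print(f"EU threshold:            p* = {p_star:.3f}")
    print(f"comfortably withhold if p < {p_withhold:.3f}")
    print(f"comfortably treat if    p > {p_treat:.3f}")
    ```

    Note that p_withhold depends only on the net benefit B and p_treat only on the harm H, matching the abstract's qualitative claim; between the two thresholds neither action is regret-safe and the classical threshold governs.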

  11. Action Being Character: A Promising Perspective on the Solution Concept of Game Theory

    PubMed Central

    Deng, Kuiying; Chu, Tianguang

    2011-01-01

    The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we think this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character, and he is not concerned about which strategies other players will choose and will never feel regret about his decision. It is shown by solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, that the framework of implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behaviors in social decision making situations. PMID:21573055
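
    One way to make the "implicit utility" idea concrete is to weight own and other's payoffs by a character angle, as in social value orientation models. The dictator-game sketch below is an illustrative parameterization with concave payoffs, not the paper's formal definition of implicit equilibrium.

    ```python
    import numpy as np

    # Toy dictator game under an "implicit utility" reading: the dictator weighs
    # own and other's payoffs by a character angle theta, as in social value
    # orientation rings. Illustrative parameterization, not the paper's model.
    endowment = 10.0

    def implicit_utility(keep, theta):
        own, other = keep, endowment - keep
        # concave payoffs give interior splits; linear ones yield corner choices
        return np.cos(theta) * np.sqrt(own) + np.sin(theta) * np.sqrt(other)

    keeps = np.linspace(0.0, endowment, 101)
    for label, theta in [("competitive", -0.3), ("individualistic", 0.0),
                         ("cooperative", np.pi / 4), ("altruistic", 1.2)]:
        best = keeps[np.argmax(implicit_utility(keeps, theta))]
        print(f"{label:>15}: dictator keeps {best:.1f} of {endowment:.0f}")
    ```

    A purely self-regarding player (theta = 0) keeps everything, while a cooperative character splits evenly, reproducing the kind of behavioral spread the implicit-equilibrium concept is meant to explain.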

  12. Action being character: a promising perspective on the solution concept of game theory.

    PubMed

    Deng, Kuiying; Chu, Tianguang

    2011-05-09

    The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we think this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character, and he is not concerned about which strategies other players will choose and will never feel regret about his decision. It is shown by solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, that the framework of implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behaviors in social decision making situations.

  13. Kinetics based reaction optimization of enzyme catalyzed reduction of formaldehyde to methanol with synchronous cofactor regeneration.

    PubMed

    Marpani, Fauziah; Sárossy, Zsuzsa; Pinelo, Manuel; Meyer, Anne S

    2017-12-01

    Enzymatic reduction of carbon dioxide (CO2) to methanol (CH3OH) can be accomplished using a designed set-up of three oxidoreductases utilizing reduced pyridine nucleotide (NADH) as cofactor for the reducing equivalents electron supply. For this enzyme system to function efficiently, a balanced regeneration of the reducing equivalents during reaction is required. Herein, we report the optimization of the enzymatic conversion of formaldehyde (CHOH) to CH3OH by alcohol dehydrogenase, the final step of the enzymatic redox reaction of CO2 to CH3OH, with kinetically synchronous enzymatic cofactor regeneration using either glucose dehydrogenase (System I) or xylose dehydrogenase (System II). A mathematical model of the enzyme kinetics was employed to identify the best reaction set-up for attaining optimal cofactor recycling rate and enzyme utilization efficiency. Targeted process optimization experiments were conducted to verify the kinetically modeled results. Repetitive reaction cycles were shown to enhance the yield of CH3OH, increase the total turnover number (TTN) and the biocatalytic productivity rate (BPR) value for both System I and System II whilst minimizing the exposure of the enzymes to high concentrations of CHOH. System II was found to be superior to System I with a yield of 8 mM CH3OH, a TTN of 160 and a BPR of 24 μmol CH3OH/U·h during 6 h of reaction. The study demonstrates that an optimal reaction set-up could be designed from rational kinetics modeling to maximize the yield of CH3OH, whilst simultaneously optimizing cofactor recycling and enzyme utilization efficiency. © 2017 Wiley Periodicals, Inc.
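
    The synchronous-regeneration idea can be sketched as a pair of coupled Michaelis-Menten rate laws, one consuming NADH (the reduction step) and one regenerating it. All kinetic constants and initial concentrations below are hypothetical, not the paper's fitted parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy kinetics for synchronous cofactor recycling: alcohol dehydrogenase (ADH)
    # consumes NADH to reduce formaldehyde while a second dehydrogenase (GDH or
    # XDH) regenerates NADH. All constants and concentrations are hypothetical.
    Vf, Kf, Kn = 0.5, 2.0, 0.1      # ADH: vmax (mM/min), Km(substrate), Km(NADH)
    Vr, Ks, Kn2 = 0.5, 5.0, 0.1     # regenerating enzyme: vmax, Km(sugar), Km(NAD+)

    def rates(t, y):
        F, M, NADH, NAD, S = y      # formaldehyde, methanol, cofactors, sugar
        v1 = Vf * F * NADH / ((Kf + F) * (Kn + NADH))   # reduction step
        v2 = Vr * S * NAD / ((Ks + S) * (Kn2 + NAD))    # regeneration step
        return [-v1, v1, v2 - v1, v1 - v2, -v2]

    sol = solve_ivp(rates, (0, 360), [10.0, 0.0, 0.5, 0.0, 50.0], rtol=1e-8)
    print(f"methanol after 6 h: {sol.y[1, -1]:.2f} mM")
    ```

    Balancing v1 against v2 is exactly the "kinetically synchronous" requirement: if regeneration lags, the small NADH pool is drained and the reduction stalls.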

  14. Myofibrillar and collagen protein synthesis in human skeletal muscle in young men after maximal shortening and lengthening contractions.

    PubMed

    Moore, Daniel R; Phillips, Stuart M; Babraj, John A; Smith, Kenneth; Rennie, Michael J

    2005-06-01

    We aimed to determine whether there were differences in the extent and time course of skeletal muscle myofibrillar protein synthesis (MPS) and muscle collagen protein synthesis (CPS) in human skeletal muscle in an 8.5-h period after bouts of maximal muscle shortening (SC; average peak torque = 225 ± 7 N·m, means ± SE) or lengthening contractions (LC; average peak torque = 299 ± 18 N·m) with equivalent work performed in each mode. Eight healthy young men (21.9 ± 0.6 yr, body mass index 24.9 ± 1.3 kg/m²) performed 6 sets of 10 maximal unilateral LC of the knee extensors on an isokinetic dynamometer. With the contralateral leg, they then performed 6 sets of maximal unilateral SC with work matched to the total work performed during LC (10.9 ± 0.7 vs. 10.9 ± 0.8 kJ, P = 0.83). After exercise, the participants consumed small intermittent meals to provide 0.1 g·kg⁻¹·h⁻¹ of protein and carbohydrate. Prior exercise elevated MPS above rest in both conditions, but there was a more rapid rise after LC (P < 0.01). The increases (P < 0.001) in CPS above rest were identical for both SC and LC and likely represent a remodeling of the myofibrillar basement membrane. Therefore, a more rapid rise in MPS after maximal LC could translate into greater protein accretion and muscle hypertrophy during chronic resistance training utilizing maximal LC.

  15. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space; an interesting technique among them is alpha-equivalence reduction. A distributed-memory execution environment offers yet another option. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTSs as the semantic model, can easily be adapted to a distributed implementation of alpha-equivalence reduction for maximality-based labeled transition systems.

  16. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize the fuel economy of a parallel hybrid electric vehicle over a target battery life. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery, which leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher-level control with a slow update rate, e.g. once per month, responsible for generating optimal set points for PPM; the set points considered in this paper are the battery power limits and state of charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower-level control with a high update rate, e.g. once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium-duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. In contrast, the optimal unconstrained control strategy achieves marginally higher fuel economy but fails to meet the target battery life.
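
    The PBM layer can be caricatured as a small dynamic program over a battery-wear budget: each month the controller picks how aggressively to use the battery, and cumulative wear must stay within the target-life budget. The savings and wear numbers below are hypothetical, not the paper's aging model.

    ```python
    import numpy as np

    # Minimal sketch of the PBM idea: a slow-timescale dynamic program chooses a
    # monthly battery "aggressiveness" level so that cumulative wear stays within
    # a target-life budget while fuel savings are maximized. Savings and wear
    # numbers are hypothetical, not the paper's aging model.
    months = 36
    step = 0.5                                 # wear discretization (units)
    budget_units = int(100.0 / step)           # total allowed wear, in steps
    levels = (0, 1, 2)                         # conservative/moderate/aggressive
    fuel_savings = {0: 1.0, 1: 1.8, 2: 2.2}    # fuel saved per month at each level
    wear_units = {0: 3, 1: 6, 2: 10}           # wear per month, in 0.5-unit steps

    NEG = -1e18                                # stands for "infeasible"
    V = np.zeros(budget_units + 1)             # value-to-go vs wear already used
    for _ in range(months):                    # backward induction over months
        V_next = np.full_like(V, NEG)
        for u in levels:
            k = wear_units[u]
            # choosing level u this month moves wear from w to w + k
            V_next[: budget_units + 1 - k] = np.maximum(
                V_next[: budget_units + 1 - k], fuel_savings[u] + V[k:])
        V = V_next
    print(f"max fuel savings over {months} months within the wear budget: {V[0]:.1f}")
    ```

    The optimum mixes aggressiveness levels so that the wear budget binds exactly at end of life, which is the behavior the abstract contrasts with the unconstrained strategy.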

  17. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.
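
    For the single-period, single-bus special case without losses or minimum-profit constraints, the independent system operator's market-clearing step reduces to a small linear program. The sketch below uses assumed offer and bid blocks; the dissertation's complementarity machinery enters only when dual-variable constraints and network effects are added.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Single-period, single-bus market-clearing sketch: the ISO maximizes social
    # welfare given stepwise offers and bids. A toy stand-in for the network-
    # constrained complementarity model (no losses, no minimum-profit terms).
    gen_price = np.array([18.0, 25.0, 32.0])   # $/MWh per offer block, assumed
    gen_cap   = np.array([50.0, 40.0, 30.0])   # MW per offer block, assumed
    dem_price = np.array([40.0, 30.0, 22.0])   # $/MWh per demand block, assumed
    dem_cap   = np.array([45.0, 35.0, 40.0])   # MW per demand block, assumed

    # variables x = [g1 g2 g3 d1 d2 d3]; welfare = sum(bid*d) - sum(offer*g)
    c = np.concatenate([gen_price, -dem_price])        # linprog minimizes -welfare
    A_eq = np.array([[1, 1, 1, -1, -1, -1.0]])         # generation equals demand
    bounds = [(0.0, cap) for cap in np.concatenate([gen_cap, dem_cap])]

    res = linprog(c, A_eq=A_eq, b_eq=[0.0], bounds=bounds)
    g, d = res.x[:3], res.x[3:]
    print("dispatch (MW):", g.round(1), "| consumption (MW):", d.round(1))
    print(f"social welfare: ${-res.fun:,.0f}")
    print(f"clearing price (marginal accepted offer): ${gen_price[g > 1e-6].max():.0f}/MWh")
    ```

    The KKT conditions of this LP are exactly the consistency conditions the dissertation assembles into a mixed linear complementarity problem once minimum-profit constraints on the duals are imposed.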

  18. Application of soil block without burning process and calcium silicate panels as building wall in mountainous area

    NASA Astrophysics Data System (ADS)

    Noerwasito, Vincentius Totok; Nasution, Tanti Satriana Rosary

    2017-11-01

    Utilization of local building materials in a residential location in a mountainous area is very important, since local materials are low-energy building materials owing to their low transport energy. The local building material used in this study is a wall made from soil blocks, produced by the surrounding community from compacted soil without a burning process. To maximize the performance of the soil blocks against outdoor temperatures in the mountains, it is necessary to add a non-local building material as an insulator against the outside air; calcium silicate panels were used as this insulator. The location of the research is Trawas sub-district, Mojokerto regency, which is a mountainous area. The research problem is how to apply a composition of local materials and calcium silicate panels that meets the requirements for a wall building material, and to determine the extent of the wall's impact on indoor temperature. The result of this research was the application of soil block walls insulated with calcium silicate panels in a building model. Owing to these materials, the building exhibits a distinct difference between indoor and outdoor temperatures. Thus, this model can be applied in mountainous areas in Indonesia.

  19. Analysis of integrated photovoltaic-thermal systems using solar concentrators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusoff, M.B.

    1983-01-01

    An integrated photovoltaic-thermal system using solar concentrators utilizes the solar radiation spectrum in the production of electrical and thermal energy. The electrical conversion efficiency of this system decreases with increasing solar cell temperature. Since a high operating temperature is desirable to maximize the quality of thermal output of the planned integrated system, a proper choice of the operating temperature for the unit cell is of vital importance. The analysis predicts performance characteristics of the unit cell by considering the dependence of the heat generation, the heat absorption and the heat transmission on the material properties of the unit cell structure. An analytical model has been developed to describe the heat transport phenomena occurring in the unit cell structure. The range of applicability of the one-dimensional and the two-dimensional models, which have closed-form solutions, has been demonstrated. Parametric and design studies point out the requirements for necessary good electrical and thermal performance. A procedure utilizing functional forms of component characteristics in the form of partial coefficients of the dependent variable has been developed to design and operate the integrated system to have a desirable value of the thermal to electrical output ratio both at design and operating modes.
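
    The operating-temperature tradeoff can be sketched in a few lines: electrical efficiency falls roughly linearly with cell temperature while the usefulness (exergy) of the thermal stream rises with delivery temperature. All coefficients below are assumptions, not values from the dissertation's heat-transport model.

    ```python
    import numpy as np

    # Sketch of the operating-temperature trade-off in a combined PV-thermal
    # receiver: electrical efficiency falls linearly with cell temperature while
    # the quality (exergy) of the thermal stream rises with it. All coefficients
    # are assumptions, not values from the dissertation's heat-transport model.
    eta_ref, beta, T_ref = 0.18, 0.004, 25.0   # PV efficiency model, assumed
    collect_eff = 0.40                         # fraction of flux recovered as heat
    irradiance = 800.0                         # concentrated flux on cell (W/m^2)
    T_amb = 25.0

    def electrical_output(T):
        return irradiance * eta_ref * (1 - beta * (T - T_ref))

    def thermal_exergy(T):
        # crude Carnot-like weighting of the heat stream against ambient
        return collect_eff * irradiance * (1 - (T_amb + 273.15) / (T + 273.15))

    T = np.linspace(30, 150, 500)
    total = electrical_output(T) + thermal_exergy(T)
    print(f"value-maximizing cell temperature ~ {T[np.argmax(total)]:.0f} C")
    ```

    With these assumed coefficients the optimum is interior, illustrating why the choice of unit-cell operating temperature is described as being of vital importance.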

  20. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  1. A History of Educational Facilities Laboratories (EFL)

    ERIC Educational Resources Information Center

    Marks, Judy

    2009-01-01

    The Educational Facilities Laboratories (EFL), an independent research organization established by the Ford Foundation, opened its doors in 1958 under the direction of Harold B. Gores, a distinguished educator. Its purpose was to help schools and colleges maximize the quality and utility of their facilities, stimulate research, and disseminate…

  2. Fusing corn nitrogen recommendation tools for an improved canopy reflectance sensor performance

    USDA-ARS?s Scientific Manuscript database

    Nitrogen (N) rate recommendation tools are utilized to help producers maximize corn grain yield production. Many of these tools provide recommendations at field scales but often fail when corn N requirements are variable across the field. Canopy reflectance sensors are capable of capturing within-fi...

  3. "Bundling" in Learning.

    ERIC Educational Resources Information Center

    Spiegel, U.; Templeman, J.

    1996-01-01

    Applies the literature of bundling, tie-in sales, and vertical integration to higher education. Students are often required to purchase a package of courses, some of which are unrelated to their major. This kind of bundling policy can be utilized as a profit-maximizing strategy for universities exercising a degree of monopolistic power. (12…

  4. Method of optimizing performance of Rankine cycle power plants. [US DOE Patent

    DOEpatents

    Pope, W.L.; Pines, H.S.; Doyle, P.A.; Silvester, L.F.

    1980-06-23

    A method is described for efficiently operating a Rankine cycle power plant to maximize fuel utilization efficiency or energy conversion efficiency, or to minimize costs, by selecting a turbine fluid inlet state that lies substantially in the area adjacent to and including the transposed critical temperature line.

  5. A Pragmatic Study of Barack Obama's Political Propaganda

    ERIC Educational Resources Information Center

    Al-Ameedi, Riyadh Tariq Kadhim; Khudhier, Zina Abdul Hussein

    2015-01-01

    This study investigates, pragmatically, the language of five electoral political propaganda texts delivered by Barack Obama. It attempts to achieve the following aims: (1) identifying the speech acts used in political propaganda, (2) showing how politicians utilize Grice's maxims and the politeness principle in issuing their propaganda, (3)…

  6. 42 CFR 51c.305 - Grant evaluation and award.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... potential of the center for the development of new and effective methods for health services delivery and management; (d) The soundness of the fiscal plan for assuring effective utilization of grant funds and maximizing non-grant revenue; (e) The administrative and management capability of the applicant; (f) The...

  7. Supporting Content Learning for English Learners

    ERIC Educational Resources Information Center

    Bauer, Eurydice B.; Manyak, Patrick C.; Cook, Crystal

    2010-01-01

    In this column, the three authors address the teaching of ELs within the content areas. Specifically, they highlight the difference between having language and content objectives, utilizing small-group work to maximize involvement, and inclusion of beginning English speakers into the learning process. Currently there is a gap of 36 points between…

  8. The Probabilistic Nature of Preferential Choice

    ERIC Educational Resources Information Center

    Rieskamp, Jorg

    2008-01-01

    Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…

  9. Effective Teaching of Economics: A Constrained Optimization Problem?

    ERIC Educational Resources Information Center

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  10. Negotiator's checklist: success through preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilds, L.J.

    In gaining environmental approval for a hydro project, negotiation is a necessity. Strategies for maximizing the success of negotiations are presented. The case study of successful negotiations by the Kodiak Electric Association concerning the Terror Lake project on Kodiak Island, Alaska, is presented. The utility of bringing in professional mediators is discussed.

  11. PLANNING THE ELEMENTARY SCHOOL PLANT. SCHOOL PLANT PLANNING SERIES.

    ERIC Educational Resources Information Center

    Utah State Board of Education, Salt Lake City.

    CAREFUL PLANNING FOR THE ELEMENTARY SCHOOL MAXIMIZES THE USE OF SPACE TO PROVIDE CHILDREN WITH FREQUENT CHANGES IN ACTIVITY AND A WIDE VARIETY OF EXPERIENCES. IN THE PLANNING PROCESS, SPECIAL CONSIDERATION IS GIVEN TO LONG RANGE DEVELOPMENT THUS PREVENTING OVERBUILDING AND UNDERBUILDING. THE PLANT SHOULD FIT, THROUGH INCREASING UTILITY BY…

  12. Request from the Phthalate Esters Panel of the American Chemistry Council for correction of EPA's Action Plan for Phthalate Esters

    EPA Pesticide Factsheets

    The Phthalate Esters Panel (Panel) of the American Chemistry Council submits this Request for Correction to EPA under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency.

  13. A Human Capital Approach to Career Advising

    ERIC Educational Resources Information Center

    Shaffer, Leigh S.; Zalewski, Jacqueline M.

    2011-01-01

    We began this series by addressing the challenges of career advising in a volatile, uncertain, complex, and ambiguous environment. In this article, we define human capital and suggest that advisors encourage students to utilize the principle of maximizing human capital when making decisions. We describe the personal traits and attitudes needed to…

  14. Process optimization for maximizing the rheology modifying properties of pectic hydrocolloids recovered from steam exploded biomass

    USDA-ARS?s Scientific Manuscript database

    Pectic hydrocolloids from citrus peel waste are highly functional molecules whose utility and application have expanded well beyond their traditional use in jams and jellies. They are now finding applications in health, pharmaceutical and personal care products as well as functioning as emulsifiers,...

  15. Context Matters: Using Qualitative Inquiry to Inform Departmental Effectiveness and Student Success

    ERIC Educational Resources Information Center

    Williams, Elizabeth A.; Stassen, Martha L. A.

    2017-01-01

    This chapter describes efforts to gather and utilize qualitative data to maximize contextual knowledge at one university. The examples provided focus on how academic departments use qualitative evidence to enhance their students' success as well as how qualitative evidence supports the institution's broader strategic planning goals.

  16. New Paradigm for Translational Modeling to Predict Long‐term Tuberculosis Treatment Response

    PubMed Central

    Bartelink, IH; Zhang, N; Keizer, RJ; Strydom, N; Converse, PJ; Dooley, KE; Nuermberger, EL

    2017-01-01

    Disappointing results of recent tuberculosis chemotherapy trials suggest that knowledge gained from preclinical investigations was not utilized to maximal effect. A mouse‐to‐human translational pharmacokinetics (PKs) – pharmacodynamics (PDs) model built on a rich mouse database may improve clinical trial outcome predictions. The model included Mycobacterium tuberculosis growth function in mice, adaptive immune response effect on bacterial growth, relationships among moxifloxacin, rifapentine, and rifampin concentrations accelerating bacterial death, clinical PK data, species‐specific protein binding, drug‐drug interactions, and patient‐specific pathology. Simulations of recent trials testing 4‐month regimens predicted 65% (95% confidence interval [CI], 55–74) relapse‐free patients vs. 80% observed in the REMox‐TB trial, and 79% (95% CI, 72–87) vs. 82% observed in the Rifaquin trial. Simulation of 6‐month regimens predicted 97% (95% CI, 93–99) vs. 92% and 95% observed in 2RHZE/4RH control arms, and 100% predicted and observed in the 35 mg/kg rifampin arm of PanACEA MAMS. These results suggest that the model can inform regimen optimization and predict outcomes of ongoing trials. PMID:28561946

  17. Model of personal consumption under conditions of modern economy

    NASA Astrophysics Data System (ADS)

    Rakhmatullina, D. K.; Akhmetshina, E. R.; Ignatjeva, O. A.

    2017-12-01

    In the conditions of the modern economy, in connection with the development of production, the expansion and differentiation of the market for goods and services, and the active use of marketing tools in sales, changes occur in consumers' systems of values and needs. The motives that drive the consumer are transformed, stimulating him to act. The article presents a model of personal consumption that takes into account modern trends in consumer behavior. The consumer, making a choice, seeks to maximize the overall utility from consumption, both physiological and socio-psychological satisfaction, in accordance with his expectations, preferences and conditions of consumption. The system of his preferences is formed under the influence of factors of a different nature. It is also shown that the structure of consumer spending allows us to characterize and predict further consumer behavior in the market. Based on the proposed model and an analysis of current trends in consumer behavior, conclusions and recommendations have been made that can be used by legislative and executive government bodies, business organizations, research centres and other structures to form a methodological and analytical tool for preparing a forecast model of consumption.
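
    The utility-maximizing core of such a model is the textbook consumer problem. As a stand-in, the sketch below solves a two-category Cobb-Douglas choice under a budget constraint; the preference weight, prices and income are hypothetical.

    ```python
    # Textbook consumer-choice stand-in: Cobb-Douglas utility over two good
    # categories under a budget constraint, with closed-form demands. The weight,
    # prices and income are hypothetical.
    alpha = 0.6                    # weight on "physiological" consumption, assumed
    income, p1, p2 = 1000.0, 4.0, 10.0

    x1 = alpha * income / p1       # optimal demand for category 1
    x2 = (1 - alpha) * income / p2 # optimal demand for category 2
    utility = x1 ** alpha * x2 ** (1 - alpha)
    print(f"x1 = {x1:.0f}, x2 = {x2:.0f}, utility = {utility:.1f}")
    print(f"budget shares: {p1 * x1 / income:.0%} and {p2 * x2 / income:.0%}")
    ```

    The constant budget shares of the Cobb-Douglas form are exactly the kind of spending-structure regularity the article uses to characterize and forecast consumer behavior.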

  18. A Model of College Tuition Maximization

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  19. Institutional Effects in a Simple Model of Educational Production

    ERIC Educational Resources Information Center

    Bishop, John H.; Wobmann, Ludger

    2004-01-01

    This paper presents a model of educational production that tries to make sense of recent evidence on effects of institutional arrangements on student performance. In a simple principal-agent framework, students choose their learning effort to maximize their net benefits, while the government chooses educational spending to maximize its net…

  20. Implementation of health information technology to maximize efficiency of resource utilization in a geographically dispersed prenatal care delivery system.

    PubMed

    Cochran, Marlo Baker; Snyder, Russell R; Thomas, Elizabeth; Freeman, Daniel H; Hankins, Gary D V

    2012-04-01

    This study investigated the use of health information technology (HIT) to enhance resource utilization in a geographically dispersed tertiary care system with extensive outpatient and delivery services. It was initiated as a result of a systems change implemented after Hurricane Ike devastated southeast Texas. A retrospective database and electronic medical record review was performed, which included data collection from all patients evaluated 18 months prior to (epoch I) and 18 months following (epoch II) the landfall of Hurricane Ike. The months immediately following the storm were omitted from the analysis, allowing time to establish a new baseline. We analyzed a total of 21,201 patients evaluated in triage at the University of Texas Medical Branch. Epoch I consisted of 11,280 patients and epoch II consisted of 9922 patients. Using HIT, we were able to decrease the number of visits to triage while simultaneously managing more complex patients in the outpatient setting with no clinically significant change in maternal or fetal outcomes. This study developed an innovative model of care using constrained resources while providing quality and safety to our patients without additional cost to the health care delivery system.

  1. Accumulating Evidence and Research Organization (AERO) model: a new tool for representing, analyzing, and planning a translational research program.

    PubMed

    Hey, Spencer Phillips; Heilig, Charles M; Weijer, Charles

    2013-05-30

    Maximizing efficiency in drug development is important for drug developers, policymakers, and human subjects. Limited funds and the ethical imperative of risk minimization demand that researchers maximize the knowledge gained per patient-subject enrolled. Yet, despite a common perception that the current system of drug development is beset by inefficiencies, there remain few approaches for systematically representing, analyzing, and communicating the efficiency and coordination of the research enterprise. In this paper, we present the first steps toward developing such an approach: a graph-theoretic tool for representing the Accumulating Evidence and Research Organization (AERO) across a translational trajectory. This initial version of the AERO model focuses on elucidating two dimensions of robustness: (1) the consistency of results among studies with an identical or similar outcome metric; and (2) the concordance of results among studies with qualitatively different outcome metrics. The visual structure of the model is a directed acyclic graph, designed to capture these two dimensions of robustness and their relationship to three basic questions that underlie the planning of a translational research program: What is the accumulating state of total evidence? What has been the translational trajectory? What studies should be done next? We demonstrate the utility of the AERO model with an application to a case study involving the antibacterial agent, moxifloxacin, for the treatment of drug-susceptible tuberculosis. We then consider some possible elaborations for the AERO model and propose a number of ways in which the tool could be used to enhance the planning, reporting, and analysis of clinical trials. The AERO model provides an immediate visual representation of the number of studies done at any stage of research, depicting both the robustness of evidence and the relationship of each study to the larger translational trajectory. In so doing, it makes some of the invisible or inchoate properties of the research system explicit, helping to elucidate judgments about the accumulating state of evidence and supporting decision-making for future research.

  2. Improving Embryonic Stem Cell Expansion through the Combination of Perfusion and Bioprocess Model Design

    PubMed Central

    Yeo, David; Kiparissides, Alexandros; Cha, Jae Min; Aguilar-Gallardo, Cristobal; Polak, Julia M.; Tsiridis, Elefterios; Pistikopoulos, Efstratios N.; Mantalaris, Athanasios

    2013-01-01

    Background High proliferative and differentiation capacity renders embryonic stem cells (ESCs) a promising cell source for tissue engineering and cell-based therapies. Harnessing their potential, however, requires well-designed, efficient and reproducible expansion and differentiation protocols as well as avoiding hazardous by-products, such as teratoma formation. Traditional, standard culture methodologies are fragmented and limited in their fed-batch feeding strategies that afford a sub-optimal environment for cellular metabolism. Herein, we investigate the impact of metabolic stress as a result of inefficient feeding utilizing a novel perfusion bioreactor and a mathematical model to achieve bioprocess improvement. Methodology/Principal Findings To characterize nutritional requirements, the expansion of undifferentiated murine ESCs (mESCs) encapsulated in hydrogels was performed in batch and perfusion cultures using bioreactors. Despite sufficient nutrient and growth factor provision, the accumulation of inhibitory metabolites resulted in the unscheduled differentiation of mESCs and a decline in their cell numbers in the batch cultures. In contrast, perfusion cultures maintained metabolite concentration below toxic levels, resulting in the robust expansion (>16-fold) of high quality ‘naïve’ mESCs within 4 days. A multi-scale mathematical model describing population segregated growth kinetics, metabolism and the expression of selected pluripotency (‘stemness’) genes was implemented to maximize information from available experimental data. A global sensitivity analysis (GSA) was employed that identified significant (6/29) model parameters and enabled model validation. Predicting the preferential propagation of undifferentiated ESCs in perfusion culture conditions demonstrates synchrony between theory and experiment. Conclusions/Significance The limitations of batch culture highlight the importance of cellular metabolism in maintaining pluripotency, which necessitates the design of suitable ESC bioprocesses. We propose a novel investigational framework that integrates a novel perfusion culture platform (controlled metabolic conditions) with mathematical modeling (information maximization) to enhance ESC bioprocess productivity and facilitate bioprocess optimization. PMID:24339957

  3. Power maximization of a point absorber wave energy converter using improved model predictive control

    NASA Astrophysics Data System (ADS)

    Milani, Farideh; Moghaddam, Reihaneh Kardehi

    2017-08-01

    This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to the physical constraints of the system, a model predictive control is applied. The behavior of irregular waves is predicted by the Kalman filter method. Owing to the great influence of the controller parameters on the absorbed power, these parameters are optimized by the imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force, which must be predicted by the Kalman filter.
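
    A minimal version of the prediction step is a linear Kalman filter on a damped-oscillator model of the wave excitation force, whose one-step forecast would then feed the constrained MPC problem. The wave-model and noise parameters below are assumptions, not the paper's identified sea state.

    ```python
    import numpy as np

    # One-step-ahead excitation-force forecast with a linear Kalman filter on a
    # damped-oscillator wave model; the forecast would feed the constrained MPC.
    # Wave-model and noise parameters are assumptions, not an identified sea state.
    dt, w0, zeta = 0.1, 0.8, 0.05
    A = np.array([[1.0, dt],
                  [-w0**2 * dt, 1.0 - 2 * zeta * w0 * dt]])  # discretized oscillator
    H = np.array([[1.0, 0.0]])        # the force itself is measured
    Q = np.diag([1e-4, 1e-2])         # process noise covariance, assumed
    R = np.array([[0.05]])            # measurement noise covariance, assumed

    rng = np.random.default_rng(1)
    truth = np.array([[1.0], [0.0]])
    x, P = np.zeros((2, 1)), np.eye(2)
    for _ in range(200):
        truth = A @ truth + rng.normal(0.0, [[1e-2], [1e-1]]) * dt
        z = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]))
        x, P = A @ x, A @ P @ A.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (z - H @ x)                        # update
        P = (np.eye(2) - K @ H) @ P
    print(f"one-step force forecast: {(H @ (A @ x)).item():.3f}")
    ```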

  4. Computational rationality: linking mechanism and behavior through bounded utility maximization.

    PubMed

    Lewis, Richard L; Howes, Andrew; Singh, Satinder

    2014-04-01

    We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing "levels" approaches to explanation, and to other optimality-based modeling approaches. Copyright © 2014 Cognitive Science Society, Inc.

  5. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    PubMed

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
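
    The MC utility has a convenient property: because the "without player i" term does not depend on i's own action, best-responding to the marginal contribution is equivalent to best-responding to total capacity, which is why convergence follows from potential-game arguments. The sketch below runs such best-response dynamics on a toy topology with an assumed path-gain model, not the paper's physical interference model.

    ```python
    import numpy as np

    # Best-response dynamics with marginal-contribution (MC) utilities for a toy
    # subchannel-allocation instance. Positions, the path-gain law and noise are
    # assumptions; the paper's physical interference model is richer.
    rng = np.random.default_rng(0)
    N, K, noise = 6, 3, 1e-3
    pos = rng.uniform(0, 100, (N, 2))                    # small-cell positions
    gain = 1.0 / (1.0 + ((pos[:, None] - pos[None]) ** 2).sum(-1))

    def total_capacity(choice):
        """System welfare: sum of log2(1 + SINR) over all cells."""
        total = 0.0
        for i in range(N):
            interf = sum(gain[j, i] for j in range(N)
                         if j != i and choice[j] == choice[i])
            total += np.log2(1 + gain[i, i] / (noise + interf))
        return total

    choice = rng.integers(0, K, N)
    changed = True
    while changed:                                       # converges: potential game
        changed = False
        for i in range(N):
            # The MC utility is capacity-with-i minus capacity-without-i; the
            # second term is constant in i's action, so i can simply pick the
            # channel maximizing total capacity.
            best = max(range(K), key=lambda k: total_capacity(
                np.where(np.arange(N) == i, k, choice)))
            if best != choice[i]:
                choice[i], changed = best, True
    print("assignment:", choice, "| capacity:", round(total_capacity(choice), 2))
    ```

    Each accepted switch strictly increases the potential (total capacity), and the state space is finite, so the loop terminates at a pure strategy Nash equilibrium.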

  6. Utility of simultaneous interventional radiology and operative surgery in a dedicated suite for seriously injured patients.

    PubMed

    D'Amours, Scott K; Rastogi, Pratik; Ball, Chad G

    2013-12-01

    In recent years, combined interventional radiology and operative suites have been proposed and are now becoming operational in select trauma centres. Given the infancy of this technology, this review examines the rationale, benefits and challenges of hybrid suites in the management of seriously injured patients. No specific studies exist that investigate outcomes within hybrid trauma suites. Endovascular and interventional radiology techniques have been successfully employed in thoracic, abdominal, pelvic and extremity trauma. Although the association between delayed haemorrhage control and poorer patient outcomes is intuitive, most supporting scientific data are outdated. The hybrid suite model offers the potential to expedite haemorrhage control through synergistic operative, interventional radiology and resuscitative platforms. Maximizing the utility of these suites requires trained multidisciplinary teams, ergonomic and workplace considerations, as well as a fundamental paradigm shift in trauma care, which often translates into a more damage-control orientated philosophy. Hybrid suites offer tremendous potential to expedite haemorrhage control in trauma patients. Outcome evaluations from trauma units that currently have operational hybrid suites are required to establish clearer guidelines and criteria for patient management.

  7. Multi-Topic Tracking Model for dynamic social network

    NASA Astrophysics Data System (ADS)

    Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun

    2016-07-01

    The topic tracking problem has attracted much attention in recent decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic Bayesian networks, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs random field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to account for the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered to ensure the continuity of the topic itself in the topic evolution model. The expectation-maximization (EM) algorithm is utilized to estimate the parameters of the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and the Dynamic Topic Model (DTM) in generalization performance, topic interpretability, topic content evolution and topic popularity evolution.

  8. Maximal Oxygen Uptake, Sweating and Tolerance to Exercise in the Heat

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Castle, B. L.; Ruff, W. K.

    1972-01-01

    The physiological mechanisms that facilitate acute acclimation to heat have not been fully elucidated, but the result is the establishment of a more efficient cardiovascular system to increase heat dissipation via increased sweating that allows the acclimated man to function with a cooler internal environment and to extend his performance. Men in good physical condition with high maximal oxygen uptakes generally acclimate to heat more rapidly and retain it longer than men in poorer condition. Also, upon first exposure trained men tolerate exercise in the heat better than untrained men. Both resting in heat and physical training in a cool environment confer only partial acclimation when first exposed to work in the heat. These observations suggest separate additive stimuli of metabolic heat from exercise and environmental heat to increase sweating during the acclimation process. However, the necessity of utilizing physical exercise during acclimation has been questioned. Bradbury et al. (1964) have concluded exercise has no effect on the course of heat acclimation since increased sweating can be induced by merely heating resting subjects. Preliminary evidence suggests there is a direct relationship between the maximal oxygen uptake and the capacity to maintain thermal regulation, particularly through the control of sweating. Since increased sweating is an important mechanism for the development of heat acclimation, and fit men have high sweat rates, it follows that upon initial exposure to exercise in the heat, men with high maximal oxygen uptakes should exhibit less strain than men with lower maximal oxygen uptakes. The purpose of this study was: (1) to determine if men with higher maximal oxygen uptakes exhibit greater tolerance than men with lower oxygen uptakes during early exposure to exercise in the heat, and (2) to investigate further the mechanism of the relationship between sweating and maximal work capacity.

  9. A clustering-based fuzzy wavelet neural network model for short-term load forecasting.

    PubMed

    Kodogiannis, Vassilis S; Amina, Mahdi; Petrounias, Ilias

    2013-10-01

    Load forecasting is a critical element of power system operation, involving prediction of the future level of demand to serve as the basis for supply and demand planning. This paper presents the development of a novel clustering-based fuzzy wavelet neural network (CB-FWNN) model and validates its prediction on short-term electric load forecasting for the power system of the Greek island of Crete. The proposed model is obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of the fuzzy rules with a "multiplication" wavelet neural network (MWNN). Multidimensional Gaussian activation functions are used in the IF part of the fuzzy rules. A fuzzy subtractive clustering scheme is employed as a pre-processing technique to find the initial set and adequate number of clusters, and ultimately the number of multiplication nodes in the MWNN, while Gaussian mixture models with the expectation-maximization algorithm are utilized for the definition of the multidimensional Gaussians. The results corresponding to the minimum and maximum power load indicate that the proposed load forecasting model provides significantly accurate forecasts compared to conventional neural network models.
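
    The Gaussian-membership step can be sketched with an off-the-shelf EM fit: the mixture responsibilities then play the role of fuzzy memberships feeding the wavelet consequents. The features below are synthetic, and sklearn's GaussianMixture stands in for the paper's subtractive-clustering initialization.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # EM fit of multidimensional Gaussians whose responsibilities act as fuzzy
    # memberships for the wavelet consequents. Synthetic features; sklearn's
    # GaussianMixture stands in for the paper's subtractive-clustering + EM step.
    rng = np.random.default_rng(0)
    hours = rng.uniform(0, 24, 500)
    X = np.column_stack([
        np.sin(2 * np.pi * hours / 24),   # time-of-day encoding
        np.cos(2 * np.pi * hours / 24),
        rng.normal(20, 5, 500),           # temperature (C), synthetic
        rng.normal(100, 15, 500),         # lagged load (MW), synthetic
    ])
    gmm = GaussianMixture(n_components=4, covariance_type="full",
                          random_state=0).fit(X)
    print(np.round(gmm.predict_proba(X[:3]), 3))   # fuzzy memberships of 3 samples
    ```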

  10. A Coordinated and Comprehensive School-Based Career Placement Model: Volume III of a Research Project to Develop a Coordinated Comprehensive Placement System.

    ERIC Educational Resources Information Center

    Wisconsin Univ., Madison. Center for Studies in Vocational and Technical Education.

    Volume 3 presents a descriptive outline of the Wisconsin school-based career placement model. The two major objectives for the model are: (1) to maximize the individual student's competencies for independent career functioning and (2) to maximize the availability of career placement options. For orderly transition, each student must receive the…

  11. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    NASA Astrophysics Data System (ADS)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to begin production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit-maximization economic variables, which must consider the legal, political, and geological aspects of a project. This development comprises a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single-product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves that presents a more comprehensive representation of reserve-to-production ratio values than the inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production; this calculated time provided a measure of the discounted cost and the calculation of net present value for developing new reserves. This study concluded that the theoretical model developed by this research may be used to provide a predictive equation for each major oil and gas state so that a net present value to undiscounted net cash flow ratio might be calculated in order to establish an investment signal for profit maximizers. The equation showed how production decisions were influenced by exogenous factors, such as price, and how policies performed, which led to recommendations regarding effective policies and prudent planning.
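
    Under exponential depletion the disaggregated bookkeeping is simple: production from a developed cohort is proportional to its remaining reserves, so the cohort's reserve-to-production ratio is the constant 1/a. The sketch below tracks one hypothetical cohort and its discounted value; all numbers are illustrative.

    ```python
    import numpy as np

    # Exponential-depletion bookkeeping for one developed reserve cohort:
    # q(t) = a * R(t) implies R/P = 1/a, a constant, unlike the aggregate
    # historical ratios criticized above. All numbers are illustrative.
    a = 0.08                      # annual depletion rate of developed reserves
    R0 = 500.0                    # newly developed reserve cohort (MMbbl)
    years = np.arange(0, 30)
    R = R0 * np.exp(-a * years)   # remaining reserves in the cohort
    q = a * R                     # annual production from the cohort
    print(f"R/P ratio = {R[0] / q[0]:.1f} years (constant, = 1/a)")

    # discounted value of the cohort's production stream
    price_net = 12.0              # netback per barrel, assumed
    disc = 0.10                   # discount rate, assumed
    npv = np.sum(price_net * q * np.exp(-disc * years))
    print(f"NPV of the cohort ~ ${npv:,.0f} million")
    ```

    Delaying development shifts the whole production stream later and shrinks the NPV by the discount factor, which is the mechanism behind the study's development-timing signal.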

  12. Bioeconomic of profit maximization of red tilapia (Oreochromis sp.) culture using polynomial growth model

    NASA Astrophysics Data System (ADS)

    Wijayanto, D.; Kurohman, F.; Nugroho, RA

    2018-03-01

    The purpose of this research was to develop a bioeconomic model of profit maximization that can be applied to red tilapia culture. The fish growth model uses a polynomial growth function, and profit is maximized by setting the first derivative of the profit equation with respect to culture time equal to zero. The research also developed equations to estimate the culture time needed to reach a target harvest size. The results proved that the model can be applied to red tilapia culture: in this case study, red tilapia culture achieved its maximum profit of Rp. 28,605,731 per culture cycle at 584 days, and with a target harvest size of 250 g, red tilapia culture requires 82 days of culture time.
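
    The paper's recipe translates directly into code: fit a polynomial growth curve, write profit as revenue minus time-proportional cost, and solve d(profit)/dt = 0. The coefficients, prices and costs below are hypothetical, so the optimum differs from the study's 584-day result.

    ```python
    import numpy as np

    # Sketch of the paper's recipe with hypothetical numbers: polynomial growth
    # W(t) in grams/fish, profit = revenue - time-proportional cost, optimum from
    # d(profit)/dt = 0. Coefficients, prices and costs are not the study's values.
    W = np.polynomial.Polynomial([5.0, 0.9, 2.5e-3, -4.0e-6])   # weight vs days
    n_fish, survival = 10_000, 0.85
    price = 0.04                 # currency per gram at harvest, assumed
    daily_cost = 180.0           # feed, labor, etc. per day, assumed

    profit = (price * n_fish * survival) * W - np.polynomial.Polynomial([0.0, daily_cost])

    crit = profit.deriv().roots()
    crit = np.real(crit[np.isreal(crit)])
    crit = crit[crit > 0]
    t_star = crit[np.argmax(profit(crit))]
    print(f"optimal culture time: {t_star:.0f} days; profit: {profit(t_star):,.0f}")

    # culture time to hit a 250 g harvest-size target: smallest positive root
    hit = (W - 250.0).roots()
    hit = np.real(hit[np.isreal(hit)])
    print(f"time to reach 250 g: {hit[hit > 0].min():.0f} days")
    ```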

  13. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints

    NASA Astrophysics Data System (ADS)

    Preda, Vasile; Dedu, Silvia; Gheorghe, Carmen

    2015-10-01

    In this paper, by using the entropy maximization principle with Tsallis entropy, new distribution families for modeling the income distribution are derived. Also, new classes of Lorenz curves are obtained by applying the entropy maximization principle with Tsallis entropy, under mean and Gini index equality and inequality constraints.
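
    In standard notation, the underlying variational problem is the following (the paper's exact constraint set and support may differ; the Gini constraint is written here via the tail-distribution identity):

    ```latex
    % Maximum-Tsallis-entropy problem (standard definitions; the paper's exact
    % constraint set and support may differ). F is the cdf of the density f.
    \max_{f \ge 0}\; S_q[f] \;=\; \frac{1 - \int_0^\infty f(x)^q \, dx}{q - 1}
    \quad \text{subject to} \quad
    \int_0^\infty f(x)\,dx = 1, \qquad
    \int_0^\infty x\, f(x)\,dx = \mu, \qquad
    1 - \frac{1}{\mu}\int_0^\infty \bigl(1 - F(x)\bigr)^2 dx \;=\; G_0 .
    ```

    The stationary solutions are q-exponential densities in the constraint functions, and each solution density induces a Lorenz curve L(p) = (1/μ) ∫ from 0 to F⁻¹(p) of x f(x) dx, which is how new Lorenz-curve families arise from the maximization; replacing the equality on G_0 with an inequality yields the inequality-constrained classes.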

  14. Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.

    ERIC Educational Resources Information Center

    Atkinson, Richard C.

    This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…

  15. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially mitigate the complexity of existing models due to the smaller number of constraints and variables. Also, uncertain shipments are studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail to work due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as the second objective together with the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm a better performance of the modified NSGA-II versus the traditional one.

  16. Impacts of forest fragmentation on species richness: a hierarchical approach to community modelling

    USGS Publications Warehouse

    Zipkin, Elise F.; DeWan, Amielle; Royle, J. Andrew

    2009-01-01

    1. Species richness is often used as a tool for prioritizing conservation action. One method for predicting richness and other summaries of community structure is to develop species-specific models of occurrence probability based on habitat or landscape characteristics. However, this approach can be challenging for rare or elusive species for which survey data are often sparse. 2. Recent developments have allowed for improved inference about community structure based on species-specific models of occurrence probability, integrated within a hierarchical modelling framework. This framework offers advantages to inference about species richness over typical approaches by accounting for both species-level effects and the aggregated effects of landscape composition on a community as a whole, thus leading to increased precision in estimates of species richness by improving occupancy estimates for all species, including those that were observed infrequently. 3. We developed a hierarchical model to assess the community response of breeding birds in the Hudson River Valley, New York, to habitat fragmentation and analysed the model using a Bayesian approach. 4. The model was designed to estimate species-specific occurrence and the effects of fragment area and edge (as measured through the perimeter and the perimeter/area ratio, P/A), while accounting for imperfect detection of species. 5. We used the fitted model to make predictions of species richness within forest fragments of variable morphology. The model revealed that species richness of the observed bird community was maximized in small forest fragments with a high P/A. However, the number of forest interior species, a subset of the community with high conservation value, was maximized in large fragments with low P/A. 6. Synthesis and applications. Our results demonstrate the importance of understanding the responses of both individual, and groups of species, to environmental heterogeneity while illustrating the utility of hierarchical models for inference about species richness for conservation. This framework can be used to investigate the impacts of land-use change and fragmentation on species or assemblage richness, and to further understand trade-offs in species-specific occupancy probabilities associated with landscape variability.

  17. Resource Allocation Algorithms for the Next Generation Cellular Networks

    NASA Astrophysics Data System (ADS)

    Amzallag, David; Raz, Danny

    This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provably good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.
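
    Cell planning with opening costs is closely related to budgeted maximum coverage, for which a simple greedy rule (repeatedly open the affordable site with the best newly-covered-demand per unit cost) is the textbook starting point. The sketch below shows only that rule; the chapter's algorithms also handle capacities, interference, and QoS, which this toy version ignores.

      def greedy_cell_planning(sites, costs, budget):
          # Greedy sketch for budgeted maximum coverage.
          # sites: site id -> set of demand points it covers
          # costs: site id -> opening cost
          # Repeatedly opens the affordable site with the largest
          # newly-covered-demand / cost ratio.
          covered, chosen, spent = set(), [], 0.0
          while True:
              best, best_ratio = None, 0.0
              for s, cov in sites.items():
                  if s in chosen or spent + costs[s] > budget:
                      continue
                  ratio = len(cov - covered) / costs[s]
                  if ratio > best_ratio:
                      best, best_ratio = s, ratio
              if best is None:
                  return chosen, covered
              chosen.append(best)
              spent += costs[best]
              covered |= sites[best]

      # toy instance: three candidate sites, seven demand points, budget 5
      sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
      costs = {"A": 2.0, "B": 1.0, "C": 3.0}
      print(greedy_cell_planning(sites, costs, budget=5.0))

    (The standard constant-factor guarantee for budgeted coverage additionally compares the greedy result against the best single affordable site; the sketch omits that refinement.)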

  18. The Design of Distributed Micro Grid Energy Storage System

    NASA Astrophysics Data System (ADS)

    Liang, Ya-feng; Wang, Yan-ping

    2018-03-01

    When a distributed micro-grid runs in island mode, the energy storage system is the core component that maintains stable operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily cause volatility in the micro-grid. To address these problems, this paper proposes an array-type energy storage structure and analyses its architecture and working principle. Finally, a model of the array-type energy storage structure is built in MATLAB; the simulation results show that the array-type energy storage system is highly flexible, maximizes the utilization of the energy storage system, guarantees the reliable operation of the distributed micro-grid, and performs peak clipping and valley filling.
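
    The peak-clipping and valley-filling function can be sketched as a threshold dispatch rule: the storage array charges in load valleys and discharges at load peaks, within power and state-of-charge limits. The thresholds and ratings below are invented for illustration, and the sketch assumes unit-length time steps and lossless storage, unlike a full simulation of the paper's array structure.

      def peak_shave(load, low, high, capacity, max_power, soc=0.5):
          # Threshold dispatch sketch for peak clipping and valley filling.
          # Charges the storage array in valleys (load < low) and discharges
          # at peaks (load > high), within power and state-of-charge limits.
          # Assumes unit-length time steps and lossless storage.
          energy = soc * capacity
          net = []
          for d in load:
              if d > high:                     # peak: discharge
                  p = min(max_power, d - high, energy)
                  energy -= p
                  net.append(d - p)
              elif d < low:                    # valley: charge
                  p = min(max_power, low - d, capacity - energy)
                  energy += p
                  net.append(d + p)
              else:
                  net.append(d)
          return net

      print(peak_shave([20, 35, 80, 95, 60, 25], low=30, high=70,
                       capacity=100, max_power=20))

    On the toy load profile, the 95 kW peak is shaved to 75 kW and the 20 kW valley is filled to 30 kW, flattening the net load the micro-grid must balance.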

  19. Perfect Information vs Random Investigation: Safety Guidelines for a Consumer in the Jungle of Product Differentiation.

    PubMed

    Biondo, Alessio Emanuele; Giarlotta, Alfio; Pluchino, Alessandro; Rapisarda, Andrea

    2016-01-01

    We present a graph-theoretic model of consumer choice, where final decisions are shown to be influenced by information and knowledge, in the form of individual awareness, discriminating ability, and perception of market structure. Building upon Hotelling's distance-based differentiation idea, we describe the behavioral experience of several prototypes of consumers, who walk a hypothetical cognitive path in an attempt to maximize their satisfaction. Our simulations show that even consumers endowed with a small amount of information and knowledge may reach a very high level of utility. On the other hand, complete ignorance negatively affects the whole consumption process. In addition, rather unexpectedly, a random walk on the graph turns out to be a winning strategy below a minimal threshold of information and knowledge.
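
    A toy version of the random-investigation strategy: the consumer walks randomly over a product graph, remembers the best utility encountered, and settles on it after a fixed number of steps. The graph, utilities, and stopping rule here are invented; the paper's cognitive-path model is considerably richer.

      import random

      def random_investigation(graph, utility, start, steps, rng):
          # Random walk over a product-similarity graph: the consumer moves
          # to a uniformly random neighbour at each step and finally picks
          # the best product encountered along the walk.
          current, best = start, start
          for _ in range(steps):
              current = rng.choice(graph[current])
              if utility[current] > utility[best]:
                  best = current
          return best, utility[best]

      # toy market: a ring of eight differentiated products
      graph = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
      utility = dict(enumerate([2, 5, 3, 9, 1, 4, 7, 6]))
      print(random_investigation(graph, utility, start=0, steps=10,
                                 rng=random.Random(1)))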

  20. Negative Correlation Learning for Customer Churn Prediction: A Comparison Study

    PubMed Central

    Faris, Hossam

    2015-01-01

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known to service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper, we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble can achieve better generalization performance (a higher rate of correctly identified churners) compared with an MLP ensemble trained without NCL (a flat ensemble) and with other common data mining techniques used for churn analysis. PMID:25879060
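
    Negative correlation learning adds a penalty that pushes each member's output away from the ensemble mean, so that members make diverse, partially cancelling errors. The sketch below shows the per-member NCL loss and its output gradient, following the Liu-Yao-style formulation commonly cited for NCL, with an illustrative penalty weight lam; the paper applies NCL inside MLP training rather than to raw prediction vectors like this.

      import numpy as np

      def ncl_loss_and_grads(preds, target, lam):
          # Per-member loss and output gradient for negative correlation
          # learning on one sample. Member i's loss is
          #   E_i = 0.5*(F_i - d)**2 + lam*(F_i - F_bar)*sum_{j!=i}(F_j - F_bar)
          # and since the deviations sum to zero, the penalty equals
          # -lam*(F_i - F_bar)**2: members are rewarded for deviating from
          # the ensemble mean, which decorrelates their errors.
          f_bar = preds.mean()
          dev = preds - f_bar
          losses = 0.5 * (preds - target) ** 2 - lam * dev ** 2
          grads = (preds - target) - lam * dev   # common NCL update direction
          return losses, grads

      preds = np.array([0.8, 0.6, 0.9])          # one output per MLP member
      print(ncl_loss_and_grads(preds, target=1.0, lam=0.5))

    With lam = 0, each member is trained independently (the flat ensemble); raising lam trades a little individual accuracy for ensemble diversity.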
