Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
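The decomposition described in this abstract can be sketched numerically. The following is a minimal illustration (not the authors' implementation); the discrete likelihood, predicted state distribution, and log-preferences are hypothetical:

```python
import numpy as np

def negative_efe(A, Qs, logC):
    """Negative expected free energy of a policy, decomposed into
    extrinsic value (expected log prior preference over outcomes) and
    epistemic value (expected information gain about hidden states).
    A[o, s]: likelihood P(o|s); Qs[s]: predicted state distribution
    under the policy; logC[o]: log prior preferences over outcomes."""
    Qo = A @ Qs                      # predicted outcome distribution
    extrinsic = Qo @ logC            # E_Q(o)[ln P(o)]
    # Epistemic value = mutual information between states and outcomes
    joint = A * Qs                   # joint Q(o, s), shape (O, S)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (Qo[:, None] * Qs[None, :]), 1.0)
    epistemic = np.sum(joint * np.log(ratio))
    return extrinsic + epistemic, extrinsic, epistemic

# Two states, two outcomes: an informative likelihood yields positive
# epistemic value; a flat likelihood yields none.
A_informative = np.array([[0.9, 0.1], [0.1, 0.9]])
A_flat = np.array([[0.5, 0.5], [0.5, 0.5]])
Qs = np.array([0.5, 0.5])
logC = np.log(np.array([0.8, 0.2]))  # the agent prefers outcome 0
_, _, ep_info = negative_efe(A_informative, Qs, logC)
_, _, ep_flat = negative_efe(A_flat, Qs, logC)
```

With the flat likelihood no observation is informative, so only extrinsic value remains; this is the point at which, per the abstract, exploitation takes over.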
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and yields an ROC surface that contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for large databases. This research therefore presents an alternative approach that uses an Expectation-Maximization algorithm to avoid the determinant matrix manipulation, reducing the complexity of these stages. To further improve computational time, a novel parallel architecture was employed to exploit parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
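The core idea of replacing the covariance eigendecomposition with an EM iteration follows Roweis-style EM for PCA; a minimal serial sketch is below. The paper's parallel architecture and exact formulation may differ, and the data here are synthetic:

```python
import numpy as np

def em_pca(Y, k, n_iter=100, seed=0):
    """EM algorithm for PCA (Roweis-style): recovers a k-dimensional
    principal subspace of centered data Y (d x n) without forming the
    d x d covariance matrix or running a full eigendecomposition."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    W = rng.standard_normal((d, k))
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)    # E-step: latent coords
        W = Y @ X.T @ np.linalg.inv(X @ X.T)     # M-step: update basis
    Q, _ = np.linalg.qr(W)                       # orthonormalize the basis
    return Q

rng = np.random.default_rng(1)
# Low-rank synthetic data: 50-dim samples living mostly in a 2-D subspace.
basis = rng.standard_normal((50, 2))
Y = basis @ rng.standard_normal((2, 500)) + 0.01 * rng.standard_normal((50, 500))
Y -= Y.mean(axis=1, keepdims=True)
Q = em_pca(Y, k=2)
# Fraction of total variance captured by the recovered subspace.
captured = np.linalg.norm(Q.T @ Y) ** 2 / np.linalg.norm(Y) ** 2
```

Each iteration costs only matrix products with the small k x k system, which is also why the per-iteration work parallelizes naturally across matrix blocks.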
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations.
Pan, Wei; Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B
2016-08-01
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.
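The policy comparison this abstract describes — computing a joint success rate and joint utility for each way of conducting the related negotiations, then choosing the policy with the highest expected utility — can be sketched as follows. All rates and utilities here are hypothetical:

```python
from itertools import product

# Two related negotiations, each attempted either aggressively or
# conservatively. Per-negotiation (success rate, utility if successful)
# pairs are assumed; MRN succeeds only if every negotiation succeeds.
options = {
    "aggressive":   (0.6, 10.0),
    "conservative": (0.9, 6.0),
}

def expected_utility(policy):
    joint_rate, joint_util = 1.0, 0.0
    for choice in policy:
        rate, util = options[choice]
        joint_rate *= rate          # joint success rate over negotiations
        joint_util += util          # joint utility if all succeed
    return joint_rate * joint_util

# Enumerate all possible policies and pick the expected-utility maximizer.
best = max(product(options, repeat=2), key=expected_utility)
```

In this toy instance the reliable-but-modest option wins jointly (0.81 x 12 = 9.72) even though a single aggressive negotiation has the higher standalone payoff, illustrating why sequential, negotiation-by-negotiation choices can miss the global optimum.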
The Self in Decision Making and Decision Implementation.
ERIC Educational Resources Information Center
Beach, Lee Roy; Mitchell, Terence R.
Since the early 1950s the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…
Can differences in breast cancer utilities explain disparities in breast cancer care?
Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael
2006-12-01
Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. 
Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
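A Markov cohort model of the kind used in this study can be sketched as below. The states, transition probabilities, utilities, and discounting are invented purely for illustration; the study's actual model parameters are not reproduced here:

```python
import numpy as np

def qale(P, u, horizon=40, discount=0.97):
    """Quality-adjusted life expectancy of a Markov cohort model.
    P[i, j]: annual transition probability from state i to state j;
    u[i]: utility (quality weight) for a year spent in state i."""
    dist = np.zeros(len(u))
    dist[0] = 1.0                      # cohort starts in the first state
    total, w = 0.0, 1.0
    for _ in range(horizon):
        total += w * (dist @ u)        # discounted QALYs accrued this year
        dist = dist @ P                # advance the cohort one year
        w *= discount
    return total

# Hypothetical 3-state model (stable / progressed / dead) with invented
# numbers: chemotherapy trades a lower utility in the stable state
# (treatment burden) for a lower annual progression rate.
u_chemo = np.array([0.50, 0.36, 0.0])
u_none  = np.array([0.75, 0.36, 0.0])
P_chemo = np.array([[0.92, 0.05, 0.03],
                    [0.00, 0.80, 0.20],
                    [0.00, 0.00, 1.00]])
P_none  = np.array([[0.80, 0.15, 0.05],
                    [0.00, 0.80, 0.20],
                    [0.00, 0.00, 1.00]])
prefer_chemo = qale(P_chemo, u_chemo) > qale(P_none, u_none)
```

Feeding each subject's elicited utilities into `u_chemo`/`u_none` and comparing the two QALE values is the decision rule the study models: whether chemotherapy "wins" depends directly on the subject's utilities, which is how utility differences can translate into differential treatment choices.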
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
Creating an Agent Based Framework to Maximize Information Utility
2008-03-01
information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings
Acceptable regret in medical decision making.
Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M
1999-09-01
When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
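The threshold logic in this abstract can be illustrated with the classic treatment-threshold model, plus a plausible reading of the acceptable-regret modification. The regret-band formulas below are an assumption for illustration; the paper's exact functional forms may differ:

```python
def treatment_thresholds(benefit, harm, acceptable_regret=0.0):
    """Classic threshold model: treat when P(disease) exceeds
    harm / (benefit + harm). With a nonzero acceptable-regret level r0
    (same utility units), a plausible reading of the abstract gives two
    separate thresholds: comfortably withhold below r0/benefit (a
    function of net benefit only) and comfortably treat above
    1 - r0/harm (a function of treatment risk only)."""
    classic = harm / (benefit + harm)
    withhold_below = acceptable_regret / benefit
    treat_above = 1.0 - acceptable_regret / harm
    return classic, withhold_below, treat_above

# Hypothetical utilities: net benefit 0.20 for treating the diseased,
# net harm 0.05 for treating the healthy, acceptable regret 0.01.
classic, lo, hi = treatment_thresholds(benefit=0.20, harm=0.05,
                                       acceptable_regret=0.01)
```

Between the two regret thresholds neither action is "comfortable", which is where the abstract's point bites: regret considerations widen the region in which further diagnostic information is worth seeking before committing.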
Evaluating gambles using dynamics
NASA Astrophysics Data System (ADS)
Peters, O.; Gell-Mann, M.
2016-02-01
Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions.
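The central point of this abstract — that the ensemble expectation and the time-average growth rate of a multiplicative gamble can disagree in sign — can be verified in a few lines:

```python
import math

# A multiplicative gamble: each round, wealth is multiplied by 1.5
# (heads) or 0.6 (tails) with equal probability.
up, down, p = 1.5, 0.6, 0.5

# Ensemble perspective: the expected per-round growth factor exceeds 1,
# so an expectation-value maximizer accepts the gamble.
expected_factor = p * up + (1 - p) * down          # = 1.05

# Time perspective: the time-average growth rate is the expected log
# growth factor, which is negative here, so almost every individual
# trajectory decays toward zero over repeated rounds.
time_avg_growth = p * math.log(up) + (1 - p) * math.log(down)
```

The logarithm is exactly the transformation that makes multiplicative dynamics ergodic, which is the sense in which log "utility" appears in the paper without any psychological interpretation.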
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
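The gap between a greedy heuristic and an exact (integer-programming-style) solution can be reproduced on a toy budget-allocation instance. The parcels below are hypothetical, and brute-force enumeration stands in for an ILP solver:

```python
from itertools import combinations

# Hypothetical parcels: (conservation utility, cost), with a fixed budget.
parcels = [(60, 10), (100, 20), (120, 30)]
budget = 50

def total(sel, idx):
    return sum(parcels[i][idx] for i in sel)

# Exact: brute-force over all feasible subsets (an ILP solver scales this).
best = max((s for r in range(len(parcels) + 1)
            for s in combinations(range(len(parcels)), r)
            if total(s, 1) <= budget), key=lambda s: total(s, 0))

# Greedy: take parcels in order of utility per unit cost while they fit.
order = sorted(range(len(parcels)),
               key=lambda i: parcels[i][0] / parcels[i][1], reverse=True)
greedy, spent = [], 0
for i in order:
    if spent + parcels[i][1] <= budget:
        greedy.append(i)
        spent += parcels[i][1]

# Relative utility lost by the greedy heuristic versus the optimum.
gap = 1 - total(greedy, 0) / total(best, 0)
```

Here greedy locks in the best-ratio parcel and forecloses the better pairing, losing about 27% of achievable utility — the same kind of gap (up to 12% in the study) that motivates optimizing rather than ranking.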
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.
Merrick, Jason R W; Leclerc, Philip
2016-04-01
Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek for the rational decision, but we use prospect theory to solve for the attacker's decision to descriptively model the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision, whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
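The attacker-side deviations the article models — loss aversion and likelihood insensitivity — are captured by prospect theory's value and probability-weighting functions. A sketch using the standard Tversky-Kahneman (1992) parameter estimates, applied to a hypothetical attack gamble (not the article's container-screening model):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    loss-averse (steeper by factor lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights large ones (likelihood insensitivity)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A stylized attacker option: success probability 0.05 with payoff 100,
# otherwise a loss of 10 (all numbers hypothetical).
p, win, lose = 0.05, 100.0, -10.0
eu = p * win + (1 - p) * lose                      # expected-value view
pv = pt_weight(p) * pt_value(win) + pt_weight(1 - p) * pt_value(lose)
```

Because the 5% success chance is overweighted (to roughly 0.13 under these parameters), the prospect-theoretic attacker can find an option attractive that an expected-utility attacker would reject, which is exactly why the defender's optimal screening decision shifts with these parameters.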
Decision Making Analysis: Critical Factors-Based Methodology
2010-04-01
the pitfalls associated with current wargaming methods such as assuming a western view of rational values in decision-making regardless of the cultures...Utilization theory slightly expands the rational decision-making model as it states that “actors try to maximize their expected utility by weighing the...items to categorize the decision-making behavior of political leaders which tend to demonstrate either a rational or cognitive leaning. Leaders
Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability.
Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
The Dynamics of Crime and Punishment
NASA Astrophysics Data System (ADS)
Hausken, Kjell; Moxnes, John F.
This article analyzes crime development which is one of the largest threats in today's world, frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time in imprisonment. Each criminal maximizes expected utility defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation of the criminal's utility-maximizing response to the given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. This dynamic approach or perspective is quite interesting and has to our knowledge not been presented earlier. The empirical data material for crime intensity and imprisonment for Norway, England and Wales, and the US supports the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem as in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
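The one-period analogue this paper appeals to has a closed-form solution. A sketch for CARA (exponential) utility with Gaussian excess returns, where all market parameters are assumed for illustration:

```python
import numpy as np

# One-period Markowitz/Merton-style solution: with exponential (CARA)
# utility and Gaussian excess returns, the optimal dollar weights are
# w* = (1/gamma) * inv(Sigma) @ (mu - r), independent of current wealth.
gamma = 2.0                                   # risk-aversion coefficient
mu = np.array([0.08, 0.05])                   # expected returns (assumed)
r = 0.02                                      # risk-free rate (assumed)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])              # return covariance (assumed)
w = np.linalg.solve(Sigma, mu - r) / gamma
```

In the benchmark setting of the paper, wealth is replaced by wealth in excess of the benchmark, but the resulting portfolio has this same one-period mean-variance form, which is the theoretical justification the abstract refers to.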
On the Teaching of Portfolio Theory.
ERIC Educational Resources Information Center
Biederman, Daniel K.
1992-01-01
Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…
Program Monitoring: Problems and Cases.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Collective states in social systems with interacting learning agents
NASA Astrophysics Data System (ADS)
Semeshenko, Viktoriya; Gordon, Mirta B.; Nadal, Jean-Pierre
2008-08-01
We study the implications of social interactions and individual learning features on consumer demand in a simple market model. We consider a social system of interacting heterogeneous agents with learning abilities. Given a fixed price, agents repeatedly decide whether or not to buy a unit of a good, so as to maximize their expected utilities. This model is close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. We show that the equilibrium reached depends on the nature of the information agents use to estimate their expected utilities, and may differ from the system's Nash equilibria.
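A minimal best-response simulation in the spirit of the model: agent i buys when h_i + J·(fraction of buyers) − price > 0, with h_i an idiosyncratic willingness to pay playing the role of the random field. The functional form and all parameter values are illustrative assumptions, not the paper's exact specification.

```python
import random

random.seed(0)
N, J, price = 1000, 0.4, 0.5
h = [random.random() for _ in range(N)]   # heterogeneous willingness to pay

frac = 0.0                                # fraction of agents currently buying
for _ in range(100):                      # synchronous best-response iteration
    frac = sum(1 for hi in h if hi + J * frac - price > 0) / N

print(round(frac, 2))
```

Because buying by others raises everyone's utility of buying, the fixed point exceeds the no-interaction demand 1 − F(price); with stronger coupling J the same iteration can tip to an all-buy state, which is the kind of collective-state multiplicity the paper studies.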
The Probabilistic Nature of Preferential Choice
ERIC Educational Resources Information Center
Rieskamp, Jorg
2008-01-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…
Relevance of a Managerial Decision-Model to Educational Administration.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution while providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to Pixon image reconstruction at a notably reduced computational burden, and better fidelity to the measurements than CLEAN at comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization is a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
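The positivity-preserving EM iteration of this kind (a Richardson-Lucy-type multiplicative update) can be sketched on a synthetic problem. The forward matrix and data below are invented stand-ins for the RHESSI modulation operator, and the sketch omits the paper's stopping rule.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((40, 10))            # stand-in forward (modulation) operator
x_true = rng.random(10)             # "true" source
y = A @ x_true                      # noiseless synthetic count profile

x = np.ones(10)                     # strictly positive initial estimate
sens = A.sum(axis=0)                # column sensitivities, A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative EM update, keeps x >= 0

print(float(np.linalg.norm(A @ x - y) / np.linalg.norm(y)))  # relative residual
```

The multiplicative form is what encodes the positivity constraint: a nonnegative start can never turn negative. With real Poisson counts the iteration must be stopped early, which is where the paper's C-statistic-based rule comes in.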
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected values of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA exhibit a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability.
Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092
The value of foresight: how prospection affects decision-making.
Pezzulo, Giovanni; Rigoli, Francesco
2011-01-01
Traditional theories of decision-making assume that utilities are based on the intrinsic value of outcomes; in turn, these values depend on associations between expected outcomes and the current motivational state of the decision-maker. This view disregards the fact that humans (and possibly other animals) have prospection abilities, which permit anticipating future mental processes and motivational and emotional states. For instance, we can evaluate future outcomes in light of the motivational state we expect to have when the outcome is collected, not (only) when we make a decision. Consequently, we can plan for the future and choose to store food to be consumed when we expect to be hungry, not immediately. Furthermore, similarly to any expected outcome, we can assign a value to our anticipated mental processes and emotions. It has been reported that (in some circumstances) human subjects prefer to receive an unavoidable punishment immediately, probably because they are anticipating the dread associated with the time spent waiting for the punishment. This article offers a formal framework to guide neuroeconomic research on how prospection affects decision-making. The model has two characteristics. First, it uses model-based Bayesian inference to describe anticipation of cognitive and motivational processes. Second, the utility-maximization process considers these anticipations in two ways: to evaluate outcomes (e.g., the pleasure of eating a pie is evaluated differently at the beginning of a dinner, when one is hungry, and at the end of the dinner, when one is satiated), and as outcomes having a value themselves (e.g., the case of dread as a cost of waiting for punishment). By explicitly accounting for the relationship between prospection and value, our model provides a framework to reconcile the utility-maximization approach with psychological phenomena such as planning for the future and dread.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan
This paper develops and utilizes an optimization-based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to existing state-of-the-art set-point reset strategies. The expected efficiency increase from relaxing operational constraints is evaluated.
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
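The benchmark system used in the paper's simulation study, the Van der Pol oscillator x'' − μ(1 − x²)x' + x = 0, can be integrated directly. The fixed-step RK4 sketch below shows its characteristic limit cycle; it illustrates the test system only, not the stochastic approximation EM fitting algorithm itself.

```python
def van_der_pol(s, mu=1.0):
    x, v = s
    return (v, mu * (1.0 - x * x) * v - x)

def rk4_step(f, s, dt):
    def add(a, b, c):                  # componentwise a + c*b
        return (a[0] + c * b[0], a[1] + c * b[1])
    k1 = f(s)
    k2 = f(add(s, k1, dt / 2))
    k3 = f(add(s, k2, dt / 2))
    k4 = f(add(s, k3, dt))
    return (s[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state, amp = (0.1, 0.0), 0.0           # small perturbation from the origin
for i in range(5000):                  # integrate t = 0 .. 50 with dt = 0.01
    state = rk4_step(van_der_pol, state, 0.01)
    if i >= 4000:                      # measure after transients have died out
        amp = max(amp, abs(state[0]))

print(round(amp, 2))                   # limit-cycle amplitude, close to 2
```

The unstable origin and stable limit cycle give the oscillator the nonlinearity that makes it a demanding benchmark for differential-equation model fitting.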
Defender-Attacker Decision Tree Analysis to Combat Terrorism.
Garcia, Ryan J B; von Winterfeldt, Detlof
2016-12-01
We propose a methodology, called defender-attacker decision tree analysis, to evaluate defensive actions against terrorist attacks in a dynamic and hostile environment. Like most game-theoretic formulations of this problem, we assume that the defenders act rationally by maximizing their expected utility or minimizing their expected costs. However, we do not assume that attackers maximize their expected utilities. Instead, we encode the defender's limited knowledge about the attacker's motivations and capabilities as a conditional probability distribution over the attacker's decisions. We apply this methodology to the problem of defending against possible terrorist attacks on commercial airplanes using one of three weapons: infrared-guided MANPADS (man-portable air defense systems), laser-guided MANPADS, or visually targeted RPGs (rocket-propelled grenades). We also evaluate three countermeasures against these weapons: DIRCMs (directional infrared countermeasures), perimeter control around the airport, and hardening airplanes. The model includes deterrence effects, the effectiveness of the countermeasures, and the substitution of weapons and targets once a specific countermeasure is selected. It also includes a second stage of defensive decisions after an attack occurs. Key findings are: (1) due to the high cost of the countermeasures, not implementing countermeasures is the preferred defensive alternative for a large range of parameters; (2) if the probability of an attack and the associated consequences are large, a combination of DIRCMs and ground perimeter control is preferred over any single countermeasure. © 2016 Society for Risk Analysis.
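A schematic version of the central calculation, with all costs, probabilities, and consequences invented for illustration: the defender picks the countermeasure minimizing expected cost, where the attacker's weapon choice is a probability distribution conditional on the defense in place (rather than an expected-utility-maximizing best response).

```python
attack_prob = {                       # P(weapon | defense in place), hypothetical
    "none":      {"ir_manpads": 0.5, "laser_manpads": 0.3, "rpg": 0.2},
    "dircm":     {"ir_manpads": 0.1, "laser_manpads": 0.5, "rpg": 0.4},
    "perimeter": {"ir_manpads": 0.6, "laser_manpads": 0.3, "rpg": 0.1},
}
damage = {"ir_manpads": 900.0, "laser_manpads": 700.0, "rpg": 400.0}  # $M, invented
defense_cost = {"none": 0.0, "dircm": 300.0, "perimeter": 150.0}      # $M, invented
p_attack = 0.1                        # probability an attack is attempted

def expected_cost(defense):
    exp_damage = sum(p * damage[w] for w, p in attack_prob[defense].items())
    return defense_cost[defense] + p_attack * exp_damage

best = min(defense_cost, key=expected_cost)
print(best, round(expected_cost(best), 1))
```

With these illustrative numbers, the high countermeasure costs make "no countermeasure" the expected-cost minimizer, mirroring the paper's first key finding. Note how the conditional distribution encodes weapon substitution: under DIRCMs, attack probability shifts away from infrared-guided MANPADS.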
Influencing Busy People in a Social Network
Sarkar, Kaushik; Sundaram, Hari
2016-01-01
We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions, and resource constraints. Second, we show that multiple-behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors, and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127
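The greedy seeding licensed by the submodularity result can be sketched as follows. Here "spread" is simplified to the size of the seeds' combined neighborhood on a toy graph; the paper's actual objective also models intent, resource budgets, and multiple behaviors.

```python
graph = {                            # toy undirected topology
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4, 5},
    4: {3, 5}, 5: {3, 4, 6}, 6: {5},
}

def spread(seeds):
    covered = set(seeds)
    for s in seeds:
        covered |= graph[s]
    return len(covered)              # monotone and submodular in the seed set

def greedy(k):
    seeds = []
    for _ in range(k):               # repeatedly add the largest marginal gain
        nxt = max((v for v in graph if v not in seeds),
                  key=lambda v: spread(seeds + [v]) - spread(seeds))
        seeds.append(nxt)
    return seeds

picked = greedy(2)
print(picked, spread(picked))        # two seeds cover the whole 7-node graph
```

Because the objective is monotone and submodular, this greedy selection is guaranteed to reach at least a (1 − 1/e) fraction of the optimal spread, which is what motivates the paper's cheaper heuristics for large networks.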
To kill a kangaroo: understanding the decision to pursue high-risk/high-gain resources.
Jones, James Holland; Bird, Rebecca Bliege; Bird, Douglas W
2013-09-22
In this paper, we attempt to understand hunter-gatherer foraging decisions about prey that vary in both the mean and variance of energy return using an expected utility framework. We show that for skewed distributions of energetic returns, the standard linear variance discounting (LVD) model for risk-sensitive foraging can produce quite misleading results. In addition to creating difficulties for the LVD model, the skewed distributions characteristic of hunting returns create challenges for estimating probability distribution functions required for expected utility. We present a solution using a two-component finite mixture model for foraging returns. We then use detailed foraging returns data based on focal follows of individual hunters in Western Australia hunting for high-risk/high-gain (hill kangaroo) and relatively low-risk/low-gain (sand monitor) prey. Using probability densities for the two resources estimated from the mixture models, combined with theoretically sensible utility curves characterized by diminishing marginal utility for the highest returns, we find that the expected utility of the sand monitors greatly exceeds that of kangaroos despite the fact that the mean energy return for kangaroos is nearly twice as large as that for sand monitors. We conclude that the decision to hunt hill kangaroos does not arise simply as part of an energetic utility-maximization strategy and that additional social, political or symbolic benefits must accrue to hunters of this highly variable prey.
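The paper's central comparison can be illustrated with a concave utility and two invented return distributions (these are stand-ins, not the fitted mixture models for the foraging data): the high-mean, high-variance prey loses on expected utility despite winning on expected energy.

```python
import math, random

random.seed(2)

def draw_kangaroo():        # high-risk/high-gain: usually nothing, rarely huge
    return 0.0 if random.random() < 0.9 else random.gauss(50000, 5000)

def draw_monitor():         # low-risk/low-gain: reliable modest return
    return max(0.0, random.gauss(2800, 600))

def utility(kcal):          # concave: diminishing marginal utility of energy
    return math.log1p(kcal)

n = 100000
k = [draw_kangaroo() for _ in range(n)]
m = [draw_monitor() for _ in range(n)]
ev_k, ev_m = sum(k) / n, sum(m) / n
eu_k, eu_m = sum(map(utility, k)) / n, sum(map(utility, m)) / n

print(ev_k > ev_m)          # the kangaroo wins on expected energy ...
print(eu_m > eu_k)          # ... but the sand monitor wins on expected utility
```

The reversal comes entirely from the skew: most kangaroo hunts return nothing, and the concave utility discounts the rare windfalls, exactly the situation in which the linear variance discounting model misleads.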
Dishonest Academic Conduct: From the Perspective of the Utility Function.
Sun, Ying; Tian, Rui
Dishonest academic conduct has aroused extensive attention in academic circles. To explore how scholars make decisions according to the principle of maximal utility, the authors constructed a general utility function based on expected utility theory. Concrete utility functions were deduced for three types of scholars: risk-neutral, risk-averse, and risk-preferring. Following this, the assignment method was adopted to analyze and compare scholars' utilities of academic conduct. It was concluded that changing the values of risk costs, internal condemnation costs, academic benefits, and the subjective estimation of penalties following dishonest academic conduct can lead to changes in the utility of academic dishonesty. The results of the current study suggest that within scientific research, measures to prevent and govern dishonest academic conduct should be formulated according to the various effects of the above four variables.
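A minimal version of the kind of expected-utility comparison the paper builds, with an assumed functional form and invented numbers (not the authors' fitted utility functions): a scholar cheats when the expected utility of dishonesty exceeds that of honest work, and raising the subjective detection probability flips the decision.

```python
def eu_dishonest(benefit, p_caught, penalty, condemnation):
    # assumed form: the benefit accrues either way; the penalty arrives only
    # if caught; the internal condemnation cost is paid regardless
    return benefit - p_caught * penalty - condemnation

def eu_honest(benefit_honest):
    return benefit_honest

low_risk = eu_dishonest(benefit=10.0, p_caught=0.05, penalty=60.0, condemnation=2.0)
high_risk = eu_dishonest(benefit=10.0, p_caught=0.40, penalty=60.0, condemnation=2.0)

print(low_risk > eu_honest(4.0))     # cheating "pays" when detection seems unlikely
print(high_risk > eu_honest(4.0))    # deterrence works once detection seems likely
```

The four levers the paper identifies (risk costs, condemnation costs, benefits, subjective penalty estimates) each enter this comparison, so governance measures can target whichever term dominates.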
Research priorities and plans for the International Space Station-results of the 'REMAP' Task Force
NASA Technical Reports Server (NTRS)
Kicza, M.; Erickson, K.; Trinh, E.
2003-01-01
Recent events in the International Space Station (ISS) Program have resulted in the necessity to re-examine the research priorities and research plans for future years. Due to both technical and fiscal resource constraints expected on the International Space Station, it is imperative that research priorities be carefully reviewed and clearly articulated. In consultation with OSTP and the Office of Management and Budget (OMB), NASA's Office of Biological and Physical Research (OBPR) assembled an ad-hoc external advisory committee, the Biological and Physical Research Maximization and Prioritization (REMAP) Task Force. This paper describes the outcome of the Task Force and how it is being used to define a roadmap for near- and long-term Biological and Physical Research objectives that supports NASA's Vision and Mission. Additionally, the paper discusses further prioritizations that were necessitated by budget and ISS resource constraints in order to maximize utilization of the International Space Station. Finally, a process has been developed to integrate the requirements for this prioritized research with other agency requirements to develop an integrated ISS assembly and utilization plan that maximizes scientific output. © 2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved.
Noisy Preferences in Risky Choice: A Cautionary Note
2017-01-01
We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual’s preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. PMID:28569526
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud is drawn from a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is assigned to the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
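A toy one-dimensional version of the idea: treat point elevations as a two-component Gaussian mixture (ground vs. objects), fit it by EM, and label each point by the component with the larger posterior responsibility. The data are synthetic and the sketch omits the paper's intensity-based refinement.

```python
import math, random

random.seed(3)
ground = [random.gauss(100.0, 0.3) for _ in range(300)]   # terrain elevations (m)
objects = [random.gauss(106.0, 1.0) for _ in range(100)]  # buildings / vegetation
z = ground + objects

def pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

mu, sd, w = [min(z), max(z)], [1.0, 1.0], [0.5, 0.5]      # crude initialization
for _ in range(50):
    # E-step: posterior responsibility of component 0 (ground) for each point
    r = [w[0] * pdf(x, mu[0], sd[0]) /
         (w[0] * pdf(x, mu[0], sd[0]) + w[1] * pdf(x, mu[1], sd[1])) for x in z]
    # M-step: re-estimate weights, means, and standard deviations
    for k, rk in ((0, r), (1, [1.0 - ri for ri in r])):
        s = sum(rk)
        w[k] = s / len(z)
        mu[k] = sum(ri * x for ri, x in zip(rk, z)) / s
        sd[k] = max(1e-3, math.sqrt(sum(ri * (x - mu[k]) ** 2
                                        for ri, x in zip(rk, z)) / s))

labels = [0 if ri > 0.5 else 1 for ri in r]               # 0 = ground, 1 = object
errors = sum(l != 0 for l in labels[:300]) + sum(l != 1 for l in labels[300:])
print(errors / len(z))                                    # misclassification rate
```

No elevation threshold is ever set by hand: the decision boundary falls out of the fitted posteriors, which is the "threshold-free" property the paper emphasizes.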
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doyle, T W; Shugart, H H; West, D C
1981-01-01
This study examines the utilization and management of natural forest lands to meet growing wood-energy demands. An application of a forest simulation model is described for assessing energy returns and long-term ecological impacts of wood-energy harvesting under four general silvicultural practices. Results indicate that moderate energy yields could be expected from mild cutting operations which would significantly affect neither the commercial timber market nor the composition, structure, or diversity of these forests. Forest models can provide an effective tool for determining optimal management strategies that maximize energy returns, minimize environmental detriment, and complement existing land-use plans.
The futility of utility: how market dynamics marginalize Adam Smith
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2000-10-01
Economic theorizing is based on the postulated, nonempiric notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility. The results are supposed to represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically, due to nonintegrable dynamics, when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data, or in real market data.
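The nonintegrability argument can be checked numerically: a utility function whose gradient equals the excess-demand field exists only if that field's Jacobian is symmetric, equivalently only if line integrals around closed loops vanish. For the invented field below (not one from the paper) they do not, so no potential exists.

```python
import math

# excess-demand-like field with asymmetric Jacobian (dF1/dy = 1, dF2/dx = -1)
def field(x, y):
    return (y, -x)                  # illustrative choice

# integrate F . dr around the unit circle; a gradient field would give 0
n, total = 10000, 0.0
for i in range(n):
    t = 2.0 * math.pi * i / n
    fx, fy = field(math.cos(t), math.sin(t))
    dx = -math.sin(t) * (2.0 * math.pi / n)
    dy = math.cos(t) * (2.0 * math.pi / n)
    total += fx * dx + fy * dy

print(round(total, 3))              # -6.283, i.e. -2*pi rather than 0
```

A nonzero circulation means the "utility" assigned to a consumption bundle would depend on the path taken to reach it, which is exactly the sense in which utility fails to exist.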
An analysis of competitive bidding by providers for indigent medical care contracts.
Kirkman-Liff, B L; Christianson, J B; Hillman, D G
1985-01-01
This article develops a model of behavior in bidding for indigent medical care contracts in which bidders set bid prices to maximize their expected utility, conditional on estimates of variables which affect the payoff associated with winning or losing a contract. The hypotheses generated by this model are tested empirically using data from the first round of bidding in the Arizona indigent health care experiment. The behavior of bidding organizations in Arizona is found to be consistent in most respects with the predictions of the model. Bid prices appear to have been influenced by estimated costs and by expectations concerning the potential loss from not securing a contract, the initial wealth of the bidding organization, and the expected number of competitors in the bidding process. PMID:4086301
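A stylized version of the bidding calculus, with hypothetical functional forms and numbers throughout: the organization picks the bid b maximizing P(win | b)·(b − cost) − (1 − P(win | b))·loss, where the loss term stands for the expected cost of not securing a contract.

```python
def p_win(bid, n_competitors=3):
    # hypothetical: each rival independently underbids us with a probability
    # that grows linearly in our own bid
    p_underbid = min(1.0, max(0.0, (bid - 80.0) / 60.0))
    return (1.0 - p_underbid) ** n_competitors

def expected_utility(bid, cost=85.0, loss_if_lose=5.0):
    p = p_win(bid)
    return p * (bid - cost) - (1.0 - p) * loss_if_lose

bids = [80.0 + 0.5 * i for i in range(121)]   # candidate bids from 80 to 140
best = max(bids, key=expected_utility)
print(best, round(expected_utility(best), 2))
```

The optimum trades off margin against win probability; note how raising the expected number of competitors or the loss from losing pushes the optimal bid down, which is the comparative-statics behavior the Arizona data are tested against.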
Maintaining homeostasis by decision-making.
Korn, Christoph W; Bach, Dominik R
2015-05-01
Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that, in both the foraging and the casino frames, participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
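A miniature Monte Carlo version of the task's mathematics: energy performs a random walk with steady losses and probabilistic gains, starvation (hitting zero) is absorbing, and the two environments' statistics below are invented for illustration. The "risky" environment has the higher per-step expected gain yet a far larger starvation probability, the quantity the homeostatic account emphasizes.

```python
import random

random.seed(4)

def forage(p_gain, gain, loss=1.0, start=10.0, steps=40):
    e = start
    for _ in range(steps):
        e += (gain if random.random() < p_gain else 0.0) - loss
        if e <= 0.0:
            return 0.0               # starvation is absorbing
    return e

def stats(p_gain, gain, n=20000):
    ends = [forage(p_gain, gain) for _ in range(n)]
    p_starve = sum(1 for e in ends if e == 0.0) / n
    return p_starve, sum(ends) / n   # (starvation probability, mean endpoint)

safe = stats(0.9, 1.2)               # frequent modest gains, drift +0.08/step
risky = stats(0.3, 4.0)              # rare large gains, drift +0.20/step

print(safe, risky)
```

An endpoint-utility model sees only the final distribution; the absorbing boundary along the trajectory is what distinguishes the biological prediction from it.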
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Vengerov, David
1999-01-01
Successful operations of future multi-agent intelligent systems require efficient cooperation schemes between agents sharing learning experiences. We consider a pseudo-realistic world in which one or more opportunities appear and disappear in random locations. Agents use fuzzy reinforcement learning to learn which opportunities are most worthy of pursuing based on their promised rewards, expected lifetimes, path lengths and expected path costs. We show that this world is partially observable because the history of an agent influences the distribution of its future states. We consider a cooperation mechanism in which agents share experience by using and updating one joint behavior policy. We also implement a coordination mechanism for allocating opportunities to different agents in the same world. Our results demonstrate that K cooperative agents each learning in a separate world over N time steps outperform K independent agents each learning in a separate world over K*N time steps, with this result becoming more pronounced as the degree of partial observability in the environment increases. We also show that cooperation between agents learning in the same world decreases performance with respect to independent agents. Since cooperation reduces diversity between agents, we conclude that diversity is a key parameter in the trade-off between maximizing utility from cooperation when diversity is low and maximizing utility from competitive coordination when diversity is high.
Cooperation, psychological game theory, and limitations of rationality in social interaction.
Colman, Andrew M
2003-04-01
Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.
Constrained Fisher Scoring for a Mixture of Factor Analyzers
Whipps, Gene T.
2016-09-01
Abstract fragment: "…expectation-maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a…" Contents include: 3.6 Relationship with Expectation-Maximization; 4. Simulation Examples; 4.1 Synthetic MFA Example; 4.2 Manifold Learning Example. Approved for public release; distribution is unlimited.
Dexter, F; Macario, A; Lubarsky, D A
2001-05-01
We previously studied hospitals in the United States of America that are losing money despite limiting the hours that operating room (OR) staff are available to care for patients undergoing elective surgery. These hospitals routinely keep utilization relatively high to maximize revenue. We tested, using discrete-event computer simulation, whether increasing patient volume while being reimbursed less for each additional patient can reliably achieve an increase in revenue when initial adjusted OR utilization is 90%. We found that increasing the volume of referred patients by the amount expected to fill the surgical suite (100%/90%) would increase utilization by <1% for a hospital surgical suite (with longer duration cases) and 4% for an ambulatory surgery suite (with short cases). The increase in patient volume would result in longer patient waiting times for surgery and more patients leaving the surgical queue. With a 15% reduction in payment for the new patients, the increase in volume may not increase revenue and can even decrease the contribution margin for the hospital surgical suite. The implication is that for hospitals with a relatively high OR utilization, signing discounted contracts to increase patient volume by the amount expected to "fill" the OR can have the net effect of decreasing the contribution margin (i.e., profitability). Hospitals may try to attract new surgical volume by offering discounted rates. For hospitals with a relatively high operating room utilization (e.g., 90%), computer simulations predict that increasing patient volume by the amount expected to "fill" the operating room can have the net effect of decreasing contribution margin (i.e., profitability).
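The nonlinearity at high utilization can be reproduced with a much cruder model than the paper's discrete-event simulation: a single-server queue iterated through the Lindley recursion. This is an illustrative stand-in (the arrival and service rates are invented, and an OR suite is not a single server), but it shows how mean waits explode as utilization approaches 100% while throughput barely moves.

```python
import random

def mean_wait(arrival_rate, service_rate, n=50000, seed=1):
    """Mean wait in a single-server FIFO queue, via the Lindley recursion
    W_{k+1} = max(0, W_k + S_k - A_{k+1}) with exponential times (M/M/1)."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(service_rate)   # service time of current case
        a = rng.expovariate(arrival_rate)   # gap until the next arrival
        wait = max(0.0, wait + s - a)
        total += wait
    return total / n

w_base = mean_wait(0.90, 1.0)  # roughly 90% utilization
w_full = mean_wait(0.99, 1.0)  # volume raised to "fill" the suite
```

The theoretical M/M/1 mean queue wait, rho / (mu - lambda), is 9 service times at 90% utilization and 99 at 99%: waits grow an order of magnitude for a utilization gain of under ten percentage points.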
Insurance choice and tax-preferred health savings accounts.
Cardon, James H; Showalter, Mark H
2007-03-01
We develop an infinite horizon utility maximization model of the interaction between insurance choice and tax-preferred health savings accounts. The model can be used to examine a wide range of policy options, including flexible spending accounts, health savings accounts, and health reimbursement accounts. We also develop a 2-period model to simulate various implications of the model. Key results from the simulation analysis include the following: (1) with no adverse selection, use of unrestricted health savings accounts leads to modest welfare gains, after accounting for the tax revenue loss; (2) with adverse selection and an initial pooling equilibrium comprised of "sick" and "healthy" consumers, introducing HSAs can, but does not necessarily, lead to a new pooling equilibrium. The new equilibrium results in a higher coinsurance rate, an increase in expected utility for healthy consumers, and a decrease in expected utility for sick consumers; (3) with adverse selection and a separating equilibrium, both sick and healthy consumers are better off with a health savings account; (4) efficiency gains are possible when insurance contracts are explicitly linked to tax-preferred health savings accounts.
Noisy preferences in risky choice: A cautionary note.
Bhatia, Sudeep; Loomes, Graham
2017-10-01
We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
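The paper's warning can be demonstrated in a few lines. In the sketch below (hypothetical gambles, parameter values invented), every respondent's core preference is expected-value maximization; only the power-utility curvature parameter is perturbed by zero-mean noise. Observed choice proportions nevertheless look graded and risk-sensitive.

```python
import random

def gamble_choice_rate(p, x, sure, sd=0.25, n=10000, seed=2):
    """Fraction of respondents choosing the gamble (win x w.p. p, else 0)
    over a sure amount, when each respondent's power-utility curvature is
    drawn from N(1, sd): preferences are risk neutral on average, and only
    the parameter is noisy."""
    rng = random.Random(seed)
    chose = 0
    for _ in range(n):
        alpha = rng.gauss(1.0, sd)
        if p * x ** alpha > sure ** alpha:
            chose += 1
    return chose / n

rate_hi = gamble_choice_rate(0.5, 100.0, 45.0)  # EV favors the gamble
rate_lo = gamble_choice_rate(0.5, 100.0, 55.0)  # EV favors the sure thing
```

Treating these intermediate proportions as revealed risk attitudes or probability weights would misdescribe preferences that are risk neutral by construction, which is the paper's cautionary point.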
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
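The ε-constraint idea is simple to state in code. The sketch below is the plain (not augmented) variant over a handful of invented candidate plans, brute-forced rather than solved as a mixed integer program: maximize expected health gain subject to expected cost staying below a sweeping budget ε.

```python
# Invented candidate plans: name -> (expected health gain, expected cost).
plans = {"P1": (10, 9), "P2": (8, 6), "P3": (7, 4), "P4": (3, 2)}

def eps_constraint(budgets):
    """Plain epsilon-constraint sweep: for each budget level eps, pick the
    feasible plan (cost <= eps) with maximal health gain, tracing the
    trade-off between the two conflicting objectives."""
    frontier = []
    for eps in budgets:
        feasible = {k: v for k, v in plans.items() if v[1] <= eps}
        if feasible:
            frontier.append(max(feasible, key=lambda k: feasible[k][0]))
    return frontier
```

Sweeping the budget upward walks along the cost/health-gain trade-off one plan at a time; the augmented variant used in the paper adds a small secondary term to avoid weakly dominated solutions.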
Effective return, risk aversion and drawdowns
NASA Astrophysics Data System (ADS)
Dacorogna, Michel M.; Gençay, Ramazan; Müller, Ulrich A.; Pictet, Olivier V.
2001-01-01
We derive two risk-adjusted performance measures for investors with risk averse preferences. Maximizing these measures is equivalent to maximizing the expected utility of an investor. The first measure, Xeff, is derived assuming a constant risk aversion while the second measure, Reff, is based on a stronger risk aversion to clustering of losses than of gains. The clustering of returns is captured through a multi-horizon framework. The empirical properties of Xeff and Reff are studied within the context of real-time trading models for foreign exchange rates and their properties are compared to those of more traditional measures like the annualized return, the Sharpe Ratio and the maximum drawdown. Our measures are shown to be more robust against clustering of losses and have the ability to fully characterize the dynamic behaviour of investment strategies.
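One way to read "maximizing the measure is equivalent to maximizing expected utility" is as a certainty-equivalent return. The sketch below assumes an exponential (constant-risk-aversion) utility; the paper's actual Xeff construction differs in detail, so treat this as the idea rather than the formula.

```python
import math

def x_eff(returns, gamma=2.0):
    """Certainty-equivalent ("effective") return under constant risk aversion:
    u(r) = (1 - exp(-gamma * r)) / gamma, and X_eff = u^{-1}(mean of u(r))."""
    eu = sum((1.0 - math.exp(-gamma * r)) / gamma for r in returns) / len(returns)
    return -math.log(1.0 - gamma * eu) / gamma
```

For a constant return stream the measure equals the return itself; volatility pulls it below the arithmetic mean, with gamma controlling how hard.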
What's wrong with hazard-ranking systems? An expository note.
Cox, Louis Anthony Tony
2009-07-01
Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
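The inconsistency is easy to exhibit with a toy budget problem. In the Python sketch below (all numbers invented), activities A and B reduce overlapping risks, so their joint benefit is less than the sum of their stand-alone scores; the priority-list rule funds them anyway, while exhaustive portfolio search does not.

```python
from itertools import combinations

# Invented risk-reducing activities: name -> (cost, stand-alone benefit).
acts = {"A": (4, 10), "B": (4, 9), "C": (4, 6), "D": (4, 6)}
# Correlated pair: A and B overlap, so their joint benefit is 7 less
# than the sum of their stand-alone benefits.
overlap = {frozenset("AB"): 7}

def benefit(subset):
    total = sum(acts[a][1] for a in subset)
    for pair, o in overlap.items():
        if pair <= set(subset):
            total -= o
    return total

def rank_and_fund(budget):
    """Priority-list rule: fund from the top of the stand-alone ranking down."""
    chosen, spent = [], 0
    for a in sorted(acts, key=lambda a: -acts[a][1]):
        if spent + acts[a][0] <= budget:
            chosen.append(a)
            spent += acts[a][0]
    return chosen

def optimize(budget):
    """Portfolio rule: exhaustive search over all affordable subsets."""
    affordable = (s for r in range(len(acts) + 1)
                  for s in combinations(acts, r)
                  if sum(acts[a][0] for a in s) <= budget)
    return list(max(affordable, key=benefit))
```

With a budget of 8, ranking funds {A, B} for a benefit of 12, while the optimal affordable portfolio achieves 16: the correlation between A and B is exactly what the individual scores cannot see.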
Alpha-Fair Resource Allocation under Incomplete Information and Presence of a Jammer
NASA Astrophysics Data System (ADS)
Altman, Eitan; Avrachenkov, Konstantin; Garnaev, Andrey
In the present work we deal with the concept of alpha-fair resource allocation in the situation where the decision maker (in our case, the base station) does not have complete information about the environment. Namely, we develop a concept of α-fairness under uncertainty to allocate power resource in the presence of a jammer under two types of uncertainty: (a) the decision maker does not have complete knowledge about the parameters of the environment, but knows only their distribution; (b) the jammer can come into the environment with some probability, bringing extra background noise. The goal of the decision maker is to maximize the α-fairness utility function with respect to the SNIR (signal to noise-plus-interference ratio). Here we consider a concept of the expected α-fairness utility function (short-term fairness) as well as fairness of expectation (long-term fairness). In the scenario with unknown parameters of the environment, the most adequate approach is a zero-sum game, since it can also be viewed as a minimax problem for the decision maker playing against nature, where the decision maker has to apply the best allocation under the worst circumstances. In the scenario with uncertainty about jamming being present in the system, the Nash equilibrium concept is employed, since the agents have non-zero-sum payoffs: the decision maker would like to maximize either the expected fairness or the fairness of expectation, while the jammer would like to minimize the fairness if he enters the scene. For all the scenarios, the equilibrium strategies are found in closed form. We have shown that for all the scenarios the equilibrium has to be constructed in two steps. In the first step, the equilibrium jamming strategy is constructed based on a solution of the corresponding modification of the water-filling equation. In the second step, the decision maker's equilibrium strategy is constructed by equalizing the background noise induced by the jammer.
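For reference, the α-fair utility family interpolates between throughput maximization (α = 0), proportional fairness (α = 1, logarithmic) and max-min fairness (α → ∞). The sketch below is a brute-force two-channel power split with no jammer and invented gains and noise levels, a toy stand-in for the paper's closed-form water-filling solutions.

```python
import math

def alpha_utility(x, alpha):
    """Standard alpha-fair utility: log(x) for alpha = 1, else
    x^(1 - alpha) / (1 - alpha)."""
    if alpha == 1:
        return math.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def best_split(total, gains, noise, alpha, steps=1000):
    """Grid search for the two-channel power split maximizing the summed
    alpha-fair utility of each channel's SNR (gain * power / noise)."""
    best, best_u = None, -math.inf
    for i in range(1, steps):
        p1 = total * i / steps
        p2 = total - p1
        u = sum(alpha_utility(g * p / n, alpha)
                for g, p, n in zip(gains, (p1, p2), noise))
        if u > best_u:
            best, best_u = (p1, p2), u
    return best
```

At α = 1 the optimal split is even regardless of channel gains, while at α = 0 essentially all power goes to the stronger channel, which is the fairness/efficiency dial the paper turns.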
From Wald to Savage: homo economicus becomes a Bayesian statistician.
Giocoli, Nicola
2013-01-01
Bayesian rationality is the paradigm of rational behavior in neoclassical economics. An economic agent is deemed rational when she maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. The latter's acknowledged fiasco to achieve a reinterpretation of traditional inference techniques along subjectivist and behaviorist lines raises the puzzle of how a failed project in statistics could turn into such a big success in economics. Possible answers call into play the emphasis on consistency requirements in neoclassical theory and the impact of the postwar transformation of U.S. business schools. © 2012 Wiley Periodicals, Inc.
Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization
NASA Astrophysics Data System (ADS)
Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.
2016-06-01
Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which could maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in an IBM ILOG CPLEX studio. A real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands is used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.
NASA Astrophysics Data System (ADS)
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
Impacts of Maximizing Tendencies on Experience-Based Decisions.
Rim, Hye Bin
2017-06-01
Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire the information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, a desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead their experience-based decisions to resemble decisions made when payoff distributions are given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels on both the accuracy of knowledge gained from experiences and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers as expected. The implications of the findings and unanswered questions for future research were discussed.
Strategic Style in Pared-Down Poker
NASA Astrophysics Data System (ADS)
Burns, Kevin
This chapter deals with the manner of making diagnoses and decisions, called strategic style, in a gambling game called Pared-down Poker. The approach treats style as a mental mode in which choices are constrained by expected utilities. The focus is on two classes of utility, i.e., money and effort, and how cognitive styles compare to normative strategies in optimizing these utilities. The insights are applied to real-world concerns like managing the war against terror networks and assessing the risks of system failures. After "Introducing the Interactions" involved in playing poker, the contents are arranged in four sections, as follows. "Underpinnings of Utility" outlines four classes of utility and highlights the differences between them: economic utility (money), ergonomic utility (effort), informatic utility (knowledge), and aesthetic utility (pleasure). "Inference and Investment" dissects the cognitive challenges of playing poker and relates them to real-world situations of business and war, where the key tasks are inference (of cards in poker, or strength in war) and investment (of chips in poker, or force in war) to maximize expected utility. "Strategies and Styles" presents normative (optimal) approaches to inference and investment, and compares them to cognitive heuristics by which people play poker--focusing on Bayesian methods and how they differ from human styles. The normative strategy is then pitted against cognitive styles in head-to-head tournaments, and tournaments are also held between different styles. The results show that style is ergonomically efficient and economically effective, i.e., style is smart. "Applying the Analysis" explores how style spaces, of the sort used to model individual behavior in Pared-down Poker, might also be applied to real-world problems where organizations evolve in terror networks and accidents arise from system failures.
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on his/her private information for maximizing his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
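A minimal example of the learning automaton scheme referenced here is the classical linear reward-inaction update; the paper's algorithm adds constraints, pricing and multiple players, so the snippet below (with invented reward levels) shows only the core mechanism of nudging an action probability distribution toward the better-rewarded action.

```python
import random

def lri(expected_rewards, n=10000, lr=0.02, seed=3):
    """Two-action linear reward-inaction automaton: the chosen action's
    probability moves toward 1 in proportion to its reward signal, and the
    other action's probability shrinks so the distribution stays normalized."""
    rng = random.Random(seed)
    p = [0.5, 0.5]
    for _ in range(n):
        a = 0 if rng.random() < p[0] else 1
        r = expected_rewards[a]          # reward signal in [0, 1]
        p[a] += lr * r * (1.0 - p[a])
        p[1 - a] -= lr * r * p[1 - a]
    return p
```

Because the update is multiplicative in the reward, the automaton drifts toward the action with the higher expected reward while always remaining a valid probability distribution.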
Intergenerational redistribution in a small open economy with endogenous fertility.
Kolmar, M
1997-08-01
The literature comparing fully funded (FF) and pay-as-you-go (PAYG) financed public pension systems in small, open economies stresses the importance of the Aaron condition as an empirical measure to decide which system can be expected to lead to a higher long-run welfare. A country with a PAYG system has a higher level of utility than a country with a FF system if the growth rate of total wage income exceeds the interest rate. Endogenizing population growth makes one determinant of the growth rate of wage incomes endogenous. The author demonstrates why the Aaron condition ceases to be a good indicator in this case. For PAYG-financed pension systems, claims can be calculated according to individual contributions or the number of children in a family. Analysis determined that for both structural determinants there is no interior solution of the problem of intergenerational utility maximization. Pure systems are therefore always welfare maximizing. Moreover, children-related pension claims induce a fiscal externality which tends to be positive. The determination of the optimal contribution rate shows that the Aaron condition is generally a misleading indicator for the comparison of FF and PAYG-financed pension systems.
Bringing the patient back in: behavioral decision-making and choice in medical economics.
Mendoza, Roger Lee
2018-04-01
We explore the behavioral methodology and "revolution" in economics through the lens of medical economics. We address two questions: (1) Are mainstream economic assumptions of utility-maximization realistic approximations of people's actual behavior? (2) Do people maximize subjective expected utility, particularly in choosing from among the available options? In doing so, we illustrate-in terms of a hypothetical experimental sample of patients with dry eye diagnosis-why and how utility in pharmacoeconomic assessments might be valued differently by patients when subjective psychological, social, cognitive, and emotional factors are considered. While experimentally-observed or surveyed behavior yields stated (rather than revealed) preferences, behaviorism offers a robust toolset in understanding drug, medical device, and treatment-related decisions compared to the optimizing calculus assumed by mainstream economists. It might also do so more perilously than economists have previously understood, in light of the intractable uncertainties, information asymmetries, insulated third-party agents, entry barriers, and externalities that characterize healthcare. Behavioral work has been carried out in many sub-fields of economics. Only recently has it been extended to healthcare. This offers medical economists both the challenge and opportunity of balancing efficiency presumptions with relatively autonomous patient choices, notwithstanding their predictable, yet seemingly consistent, irrationality. Despite its comparative youth and limitations, the scientific contributions of behaviorism are secure and its future in medical economics appears to be promising.
An entropic framework for modeling economies
NASA Astrophysics Data System (ADS)
Caticha, Ariel; Golan, Amos
2014-08-01
We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources. It captures both the welfare state of the economy as well as the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.
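The core computation (maximize entropy subject to expected-value constraints, with Lagrange multipliers emerging as prices) can be sketched with a single constraint. The costs and target mean below are invented; the multiplier is found by bisection, exploiting the fact that the expected cost is monotone in the multiplier.

```python
import math

def maxent_dist(costs, mean_cost, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over goods subject to a fixed expected
    cost: p_i proportional to exp(-lam * c_i). The multiplier lam (read as
    a price in the paper's framework) is found by bisection."""
    def mean(lam):
        w = [math.exp(-lam * c) for c in costs]
        z = sum(w)
        return sum(wi * c for wi, c in zip(w, costs)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean(mid) > mean_cost:
            lo = mid          # expected cost too high: raise the multiplier
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * c) for c in costs]
    z = sum(w)
    return [wi / z for wi in w], lam
```

When the constraint equals the unweighted average cost the multiplier is zero and the distribution is uniform (maximal entropy); tightening the budget below that average produces a positive multiplier that shifts probability toward cheaper goods.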
Voyager 1 Saturn targeting strategy
NASA Technical Reports Server (NTRS)
Cesarone, R. J.
1980-01-01
A trajectory targeting strategy for the Voyager 1 Saturn encounter has been designed to accommodate predicted uncertainties in Titan's ephemeris while maximizing spacecraft safety and science return. The encounter is characterized by a close Titan flyby 18 hours prior to Saturn periapse. Retargeting of the nominal trajectory to account for late updates in Titan's estimated position can disperse the ascending node location, which is nominally situated at a radius of low expected particle density in Saturn's ring plane. The strategy utilizes a floating Titan impact vector magnitude to minimize this dispersion. Encounter trajectory characteristics and optimal tradeoffs are presented.
Bequest Motives and the Annuity Puzzle.
Lockwood, Lee M
2012-04-01
Few retirees annuitize any wealth, a fact that has so far defied explanation within the standard framework of forward-looking, expected utility-maximizing agents. Bequest motives seem a natural explanation. Yet the prevailing view is that people with plausible bequest motives should annuitize part of their wealth, and thus that bequest motives cannot explain why most people do not annuitize any wealth. I show, however, that people with plausible bequest motives are likely to be better off not annuitizing any wealth at available rates. The evidence suggests that bequest motives play a central role in limiting the demand for annuities.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
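For concreteness, the baseline that DQAEM is designed to improve on is ordinary EM. The sketch below fits a two-component 1-D Gaussian mixture with variances fixed at one for brevity, on synthetic data; loops of exactly this shape are where the initialization-dependence and local-optima problems arise.

```python
import math
import random

def em_gmm(data, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture with fixed unit
    variances (kept fixed for brevity). Returns (weights, means)."""
    mu = [min(data), max(data)]          # crude initialization
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities under the current parameters.
        resp = []
        for x in data:
            like = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = sum(like)
            resp.append([l / s for l in like])
        # M-step: re-estimate weights and means.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
    return w, mu

rng = random.Random(4)
data = ([rng.gauss(0.0, 1.0) for _ in range(100)]
        + [rng.gauss(5.0, 1.0) for _ in range(100)])
weights, means = em_gmm(data)
```

With well-separated clusters this recovers the generating means; DQAEM's annealing is aimed at the harder cases where such a loop settles on a poor stationary point instead.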
NASA Astrophysics Data System (ADS)
Coogan, A.; Avanzi, F.; Akella, R.; Conklin, M. H.; Bales, R. C.; Glaser, S. D.
2017-12-01
Automatic meteorological and snow stations provide large amounts of information at dense temporal resolution, but data quality is often compromised by noise and missing values. We present a new gap-filling and cleaning procedure for networks of these stations based on Kalman filtering and expectation maximization. Our method utilizes a multi-sensor, regime-switching Kalman filter to learn a latent process that captures dependencies between nearby stations and handles sharp changes in snowfall rate. Since the latent process is inferred using observations across working stations in the network, it can be used to fill in large data gaps for a malfunctioning station. The procedure was tested on meteorological and snow data from Wireless Sensor Networks (WSN) in the American River basin of the Sierra Nevada. Data include air temperature, relative humidity, and snow depth from dense networks of 10 to 12 stations within 1 km² swaths. Both wet and dry water years have similar data issues. Data with artificially created gaps were used to quantify the method's performance. Our multi-sensor approach performs better than a single-sensor one, especially with large data gaps, as it learns and exploits the dominant underlying processes in snowpack at each site.
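A single-sensor, single-regime reduction of the idea fits in a few lines: a 1-D random-walk Kalman filter that runs only the predict step when an observation is missing and uses the prediction as the fill. The multi-sensor, regime-switching machinery of the paper is omitted, and the variances below are illustrative.

```python
def kalman_fill(obs, q=0.01, r=0.25):
    """1-D random-walk Kalman filter that fills gaps: when an observation is
    None only the predict step runs, and the prediction serves as the fill.
    q is the process variance, r the observation variance."""
    x = next(o for o in obs if o is not None)   # initialize at first datum
    p = 1.0
    filled = []
    for o in obs:
        p += q                    # predict: state persists, uncertainty grows
        if o is not None:         # update only where data exist
            k = p / (p + r)
            x += k * (o - x)
            p *= 1.0 - k
        filled.append(x)
    return filled
```

During a gap the estimate holds its last value while its variance grows, so the next real observation is weighted more heavily; the paper's multi-sensor filter instead borrows information from neighboring stations during the gap.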
Quality competition and uncertainty in a horizontally differentiated hospital market.
Montefiori, Marcello
2014-01-01
The chapter studies hospital competition in a spatially differentiated market in which patient demand reflects the quality/distance mix that maximizes their utility. Treatment is free at the point of use and patients freely choose the provider which best fits their expectations. Hospitals might have asymmetric objectives and costs, however they are reimbursed using a uniform prospective payment. The chapter provides different equilibrium outcomes, under perfect and asymmetric information. The results show that asymmetric costs, in the case where hospitals are profit maximizers, allow for a social welfare and quality improvement. On the other hand, the presence of a publicly managed hospital which pursues the objective of quality maximization is able to ensure a higher level of quality, patient surplus and welfare. However, the extent of this outcome might be considerably reduced when high levels of public hospital inefficiency are detectable. Finally, the negative consequences caused by the presence of asymmetric information are highlighted in the different scenarios of ownership/objectives and costs. The setting adopted in the model aims at describing the upcoming European market for secondary health care, focusing on hospital behavior, and it is intended to help the policy-maker in understanding real world dynamics.
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). 
This work has the potential to increase the value of data returned from spacecraft under certain conditions.
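The benefit of coding across links can be seen in a small worked example (not from the report itself): three links, each independently working with probability q, carrying a single size-2, value-2 message. Splitting the message over two links yields only partial value when a link fails, while an assumed rate-2/3 MDS erasure code spread over all three links recovers everything whenever any two links work.

```python
from itertools import product

q = 0.9  # assumed probability that each of 3 unit-capacity links works

def expected_value(decodable_value):
    """Average the receiver's decodable value over all 2**3 link states."""
    total = 0.0
    for state in product((0, 1), repeat=3):
        prob = 1.0
        for s in state:
            prob *= q if s else (1 - q)
        total += prob * decodable_value(state)
    return total

# Strategy A: split one size-2, value-2 message over links 0 and 1;
# each piece's value is proportional to its size (value 1 per piece).
split = expected_value(lambda st: st[0] + st[1])

# Strategy B: spread the message over all 3 links with a rate-2/3 MDS
# erasure code; the message is decodable iff at least 2 links work.
coded = expected_value(lambda st: 2.0 if sum(st) >= 2 else 0.0)
```

Here coding raises the expected decodable value from 2q = 1.8 to 2(3q²(1 − q) + q³) = 1.944; the linear-programming formulation searches for such combinations systematically.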
Annotti, Lee A; Teglasi, Hedwig
2017-01-01
Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT; i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking), presents ill-defined response expectations. To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.
Economic evaluation in the context of rare diseases: is it possible?
Silva, Everton Nunes da; Sousa, Tanara Rosângela Vieira
2015-03-01
This study analyzes the available evidence on the adequacy of economic evaluation for decision-making on the incorporation or exclusion of technologies for rare diseases. The authors conducted a structured literature review in MEDLINE via PubMed, CRD, LILACS, SciELO, and Google Scholar (gray literature). Economic evaluation studies had their origins in Welfare Economics, in which individuals maximize their utilities based on allocative efficiency. There is no widely accepted criterion in the literature to weigh the expected utilities, in the sense of assigning more weight to individuals with greater health needs. Thus, economic evaluation studies do not usually weigh utilities asymmetrically (that is, everyone is treated equally, which in Brazil is also a Constitutional principle). Healthcare systems have ratified the use of economic evaluation as the main tool to assist decision-making. However, this approach does not rule out the use of other methodologies to complement cost-effectiveness studies, such as Person Trade-Off and Rule of Rescue.
NASA Technical Reports Server (NTRS)
Edmunson, J.; Gaskin, J. A.; Doloboff, I. J.
2017-01-01
Development of a miniaturized scanning electron microscope that will utilize the martian atmosphere to dissipate charge during analysis continues. This instrument is expected to be used on a future rover or lander to answer fundamental Mars science questions. To identify the most important questions, a survey was taken at the 47th Lunar and Planetary Science Conference (LPSC). From the gathered information, initial topics were identified for an SEM on the martian surface. These priorities are identified and discussed below. Additionally, a concept of operations is provided with the goal of maximizing the science obtained with the minimum amount of communication with the instrument.
Applying Probabilistic Decision Models to Clinical Trial Design
Smith, Wade P; Phillips, Mark H
2018-01-01
Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
An expectancy-value analysis of viewer interest in television prevention news stories.
Cooper, C P; Burgoon, M; Roter, D L
2001-01-01
Understanding what drives viewer interest in television news stories about prevention topics is vital to maximizing the effectiveness of interventions that utilize this medium. Guided by expectancy-value theory, this experiment used regression analysis to identify the salient beliefs associated with viewer attitudes towards these types of news stories. The 458 study participants were recruited over 30 days from a municipal jury pool in an eastern U.S. city. Out of the 22 beliefs included in the experiment, 6 demonstrated salience. Personal relevance, novelty, shock value, and the absence of exaggeration were the core values reflected in the identified salient beliefs. This study highlights the importance of explaining the relevance of prevention stories to viewers and framing these stories with a new spin or a surprising twist. However, such manipulations should be applied with savvy and restraint, as hyping prevention news was found to be counterproductive to educating the public.
Functional specialization of the primate frontal cortex during decision making.
Lee, Daeyeol; Rushworth, Matthew F S; Walton, Mark E; Watanabe, Masataka; Sakagami, Masamichi
2007-08-01
Economic theories of decision making are based on the principle of utility maximization, and reinforcement-learning theory provides computational algorithms that can be used to estimate the overall reward expected from alternative choices. These formal models not only account for a large range of behavioral observations in human and animal decision makers, but also provide useful tools for investigating the neural basis of decision making. Nevertheless, in reality, decision makers must combine different types of information about the costs and benefits associated with each available option, such as the quality and quantity of expected reward and required work. In this article, we put forward the hypothesis that different subdivisions of the primate frontal cortex may be specialized to focus on different aspects of dynamic decision-making processes. In this hypothesis, the lateral prefrontal cortex is primarily involved in maintaining the state representation necessary to identify optimal actions in a given environment. In contrast, the orbitofrontal cortex and the anterior cingulate cortex might be primarily involved in encoding and updating the utilities associated with different sensory stimuli and alternative actions, respectively. These cortical areas are also likely to contribute to decision making in a social context.
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
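The company-side trade-off described above can be sketched with a toy decision model: expected profit as (power × payoff − trial cost), maximized over the per-arm sample size. All economic figures and the normal-approximation power formula below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# All figures below are hypothetical illustrations.
delta, sigma = 0.3, 1.0        # assumed true effect size and outcome SD
z_alpha = 1.96                 # two-sided 5% significance boundary
payoff = 500e6                 # assumed profit if the trial succeeds
cost_per_patient = 50e3        # assumed all-in cost per enrolled patient

def expected_profit(n_per_arm):
    se = sigma * math.sqrt(2.0 / n_per_arm)
    power = norm_cdf(delta / se - z_alpha)   # P(success | delta), approx.
    return power * payoff - 2 * n_per_arm * cost_per_patient

# Choose the per-arm sample size that maximizes expected profit on a grid.
best_n = max(range(10, 1001, 10), key=expected_profit)
```

Beyond this point the marginal gain in power no longer pays for additional patients, which is exactly the tension the Bayesian/decision-theoretic formulation makes explicit.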
Exercise in middle-aged adults: self-efficacy and self-presentational outcomes.
McAuley, E; Bane, S M; Mihalko, S L
1995-07-01
Whereas self-efficacy expectations have been identified as important determinants of exercise participation patterns, little empirical work that examines efficacy expectations as outcomes of exercise participation or their theoretical relationship to other psychological outcomes associated with exercise has been conducted. In the context of middle-aged males and females, the present study attempted to integrate social cognitive and impression management perspectives with respect to anxiety associated with exercise. Formerly sedentary subjects participated in a 5-month exercise program with assessments of physique anxiety, efficacy, outcome expectations, and anthropometric variables prior to and following the program. Both acute bouts and long-term participation in exercise resulted in significant increases in self-efficacy. In turn, these changes in efficacy and initial positive outcome expectations were significant predictors of reductions in physique anxiety, even when controlling for the influence of gender and reductions in body fat, weight, and circumferences. The findings are discussed in terms of the implications for structure and content of exercise environments and the utility of the proposed theoretical integration. Strategies for enhancing beliefs regarding health and fitness outcomes associated with exercise rather than appearance outcomes may be required to maximize reductions in negative body image.
The evolution of utility functions and psychological altruism.
Clavien, Christine; Chapuisat, Michel
2016-04-01
Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as their own children, romantic partners, and long-term reciprocators. Copyright © 2015 Elsevier Ltd. All rights reserved.
Forecasting continuously increasing life expectancy: what implications?
Le Bourg, Eric
2012-04-01
It has been proposed that life expectancy could linearly increase in the next decades and that median longevity of the youngest birth cohorts could reach 105 years or more. These forecasts have been criticized but it seems that their implications for future maximal lifespan (i.e. the lifespan of the last survivors) have not been considered. These implications make these forecasts untenable and it is less risky to hypothesize that life expectancy and maximal lifespan will reach an asymptotic limit in some decades from now. Copyright © 2012 Elsevier B.V. All rights reserved.
Reward-prospect interacts with trial-by-trial preparation for potential distraction
Marini, Francesco; van den Berg, Berry; Woldorff, Marty G.
2015-01-01
When attending for impending visual stimuli, cognitive systems prepare to identify relevant information while ignoring irrelevant, potentially distracting input. Recent work (Marini et al., 2013) showed that a supramodal distracter-filtering mechanism is invoked in blocked designs involving expectation of possible distracter stimuli, although this entails a cost (distraction-filtering cost) on speeded performance when distracters are expected but not presented. Here we used an arrow-flanker task to study whether an analogous cost, potentially reflecting the recruitment of a specific distraction-filtering mechanism, occurs dynamically when potential distraction is cued trial-to-trial (cued distracter-expectation cost). In order to promote the maximal utilization of cue information by participants, in some experimental conditions the cue also signaled the possibility of earning a monetary reward for fast and accurate performance. This design also allowed us to investigate the interplay between anticipation for distracters and anticipation of reward, which is known to engender attentional preparation. Only in reward contexts did participants show a cued distracter-expectation cost, which was larger with higher reward prospect and when anticipation for both distracters and reward were manipulated trial-to-trial. Thus, these results indicate that reward prospect interacts with the distracter expectation during trial-by-trial preparatory processes for potential distraction. These findings highlight how reward guides cue-driven attentional preparation. PMID:26180506
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine
2013-06-01
To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines. 
Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
As computer network bandwidth grows, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, research on packet classification algorithms based on hierarchical tries has become an important branch because of their wide practical use. Although a hierarchical trie saves substantial storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). First, the paper formalizes the packet classification problem by mapping the rules and data packets into a two-dimensional space. Second, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, forming diversified clusters. Third, it builds a hierarchical trie from the results of the expectation-maximization clustering. Finally, it conducts both simulation and real-environment experiments to compare the performance of the algorithm with other typical algorithms and analyzes the results. The hierarchical trie structure in the algorithm not only adopts trie path compression to eliminate backtracking but also addresses the low efficiency of trie updates, greatly improving overall performance.
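The E- and M-steps used to cluster rules can be illustrated with a generic one-dimensional, two-component Gaussian-mixture EM; this is a stand-in for the paper's rule clustering, with invented data and initialization.

```python
import math
import random

random.seed(0)
# Synthetic 1-D "rule characteristics": two well-separated groups.
data = ([random.gauss(0.0, 1.0) for _ in range(200)] +
        [random.gauss(8.0, 1.0) for _ in range(200)])

mu = [min(data), max(data)]          # crude initialization
var, w = [1.0, 1.0], [0.5, 0.5]

def pdf(x, m, v):
    """Gaussian density with mean m and variance v."""
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

for _ in range(30):
    # E-step: each point's responsibility under the two components.
    resp = []
    for x in data:
        a = w[0] * pdf(x, mu[0], var[0])
        b = w[1] * pdf(x, mu[1], var[1])
        resp.append((a / (a + b), b / (a + b)))
    # M-step: re-estimate mixture weights, means, and variances.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
```

In HTEMC the resulting clusters would then seed separate subtries, so lookups descend only the branch whose cluster matches the packet.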
Assigning values to intermediate health states for cost-utility analysis: theory and practice.
Cohen, B J
1996-01-01
Cost-utility analysis (CUA) was developed to guide the allocation of health care resources under a budget constraint. As the generally stated goal of CUA is to maximize aggregate health benefits, the philosophical underpinning of this method is classic utilitarianism. Utilitarianism has been criticized as a basis for social choice because of its emphasis on the net sum of benefits without regard to the distribution of benefits. For example, it has been argued that absolute priority should be given to the worst off when making social choices affecting basic needs. Application of classic utilitarianism requires use of strength-of-preference utilities, assessed under conditions of certainty, to assign quality-adjustment factors to intermediate health states. The two methods commonly used to measure strength-of-preference utility, categorical scaling and time tradeoff, produce rankings that systematically give priority to those who are better off. Alternatively, von Neumann-Morgenstern utilities, assessed under conditions of uncertainty, could be used to assign values to intermediate health states. The theoretical basis for this would be Harsanyi's proposal that social choice be made under the hypothetical assumption that one had an equal chance of being anyone in society. If this proposal is accepted, as well as the expected-utility axioms applied to both individual choice and social choice, the preferred societal arrangement is that with the highest expected von Neumann-Morgenstern utility. In the presence of risk aversion, this will give some priority to the worst-off relative to classic utilitarianism. Another approach is to raise the values obtained by time-tradeoff assessments to a power a between 0 and 1. This would explicitly give priority to the worst off, with the degree of priority increasing as a decreases. Results could be presented over a range of a. 
The results of CUA would then provide useful information to those holding a range of philosophical points of view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program as an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on South Carolina settings. The value of the multistage stochastic programming approach is also explored by comparing the model solution with the counterparts of an expected-value-based deterministic model and a two-stage stochastic model.
Neural basis of quasi-rational decision making.
Lee, Daeyeol
2006-04-01
Standard economic theories conceive homo economicus as a rational decision maker capable of maximizing utility. In reality, however, people tend to approximate optimal decision-making strategies through a collection of heuristic routines. Some of these routines are driven by emotional processes, and others are adjusted iteratively through experience. In addition, routines specialized for social decision making, such as inference about the mental states of other decision makers, might share their origins and neural mechanisms with the ability to simulate or imagine outcomes expected from alternative actions that an individual can take. A recent surge of collaborations across economics, psychology and neuroscience has provided new insights into how such multiple elements of decision making interact in the brain.
Team reasoning and collective rationality: piercing the veil of obviousness.
Colman, Andrew M; Pulford, Briony D; Rose, Jo
2008-06-01
The experiments reported in our target article provide strong evidence of collective utility maximization, and the findings suggest that team reasoning should now be included among the social value orientations used in cognitive and social psychology. Evidential decision theory offers a possible alternative explanation for our results but fails to predict intuitively compelling strategy choices in simple games with asymmetric team-reasoning outcomes. Although many of our experimental participants evidently used team reasoning, some appear to have ignored the other players' expected strategy choices and used lower-level, nonstrategic forms of reasoning. Standard payoff transformations cannot explain the experimental findings, nor team reasoning in general, without an unrealistic assumption that players invariably reason nonstrategically.
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
Estimating the relative utility of screening mammography.
Abbey, Craig K; Eckstein, Miguel P; Boone, John M
2013-05-01
The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
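The slope-to-utility conversion can be sketched in a few lines, using the standard decision-theoretic relation slope = [(1 − p)/p] · (1/U_rel) at the expected-utility-optimal operating point; the slope value below is an assumed illustrative number chosen near unity, as the abstract reports for DMIST.

```python
prevalence = 0.0059   # DMIST disease prevalence at 365-day verification
slope = 1.04          # assumed ROC slope at the clinical operating point

# At an expected-utility-optimal operating point,
#   slope = (1 - p) / p * (U_TN - U_FP) / (U_TP - U_FN),
# so the relative utility of a detection vs. a correct rejection is:
relative_utility = (1.0 - prevalence) / (prevalence * slope)
```

With a near-unity slope this yields a relative utility of roughly 160, in line with the abstract's estimate of 162 (±14%).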
Team Formation in Partially Observable Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2004-01-01
Sets of multi-agent teams often need to maximize a global utility that rates the performance of the entire system, even though a team cannot fully observe other teams' agents. Such limited observability hinders team members who pursue their team utilities from taking actions that also help maximize the global utility. In this article, we show how team utilities can be used in partially observable systems. Furthermore, we show how team sizes can be manipulated to provide the best compromise between having easy-to-learn team utilities and having them aligned with the global utility. The results show that optimally sized teams in a partially observable environment outperform one team in a fully observable environment by up to 30%.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
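A simplified illustration of the idea (not the authors' exact noise-benefit condition): k-means with a small, annealed noise term injected into each centroid update, which can help the iteration escape poor configurations while decaying fast enough not to disturb convergence. Data, initialization, and noise schedule are invented.

```python
import random

random.seed(1)
# Two clear 1-D clusters; centroids start deliberately close together.
data = ([random.gauss(0.0, 1.0) for _ in range(150)] +
        [random.gauss(10.0, 1.0) for _ in range(150)])

def noisy_kmeans(data, noise_scale, iters=25):
    cents = [0.5, 0.6]                       # poor initialization
    for t in range(iters):
        clusters = ([], [])
        for x in data:
            k = 0 if abs(x - cents[0]) <= abs(x - cents[1]) else 1
            clusters[k].append(x)
        for k in range(2):
            if clusters[k]:
                cents[k] = sum(clusters[k]) / len(clusters[k])
                # annealed noise: strong early, negligible late
                cents[k] += random.gauss(0.0, noise_scale / (t + 1))
    return sorted(cents)

cents = noisy_kmeans(data, noise_scale=2.0)
```

Because k-means is a special case of EM (hard-assignment, fixed spherical covariance), the same perturb-then-anneal scheme carries over to the mixture-model setting the paper analyzes.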
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
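The censored-data EM can be sketched for the simpler problem of estimating a single Poisson rate from right-censored counts (no background term or pulse shape; the data and saturation level are invented). The E-step replaces each saturated reading with E[N | N ≥ cap] under the current rate, using the identity E[N · 1{N ≥ c}] = λ · P(N ≥ c − 1).

```python
import math

def pois_sf(k, lam):
    """P(N >= k) for N ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** n / math.factorial(n)
                     for n in range(k))

cap = 8                      # assumed detector saturation level
observed = [3, 8, 6, 8, 7, 5, 8, 8, 4, 8, 6, 8, 7, 8, 8, 5]
censored = [y == cap for y in observed]   # saturated readings

lam = sum(observed) / len(observed)       # start at the naive mean
for _ in range(50):
    completed = []
    for y, cens in zip(observed, censored):
        if cens:
            # E-step: E[N | N >= cap] = lam * P(N >= cap-1) / P(N >= cap)
            completed.append(lam * pois_sf(cap - 1, lam) / pois_sf(cap, lam))
        else:
            completed.append(float(y))
    lam = sum(completed) / len(completed)  # M-step: plain Poisson MLE
```

The estimate ends up above the naive mean, since saturated readings understate the true counts; the paper's algorithm adds a known pulse shape and background counts on top of this scheme.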
An application of prospect theory to a SHM-based decision problem
NASA Astrophysics Data System (ADS)
Bolognani, Denise; Verzobio, Andrea; Tonelli, Daniel; Cappello, Carlo; Glisic, Branko; Zonta, Daniele
2017-04-01
Decision making investigates choices that have uncertain consequences and that cannot be completely predicted. Rational behavior may be described by the so-called expected utility theory (EUT), which aims to help choose among several options so as to maximize the expected value of the consequences. However, Kahneman and Tversky developed an alternative model, called prospect theory (PT), showing that the basic axioms of EUT are violated in several instances. Unlike EUT, PT takes into account irrational behaviors and heuristic biases. It suggests an alternative approach, in which probabilities are replaced by decision weights, which are strictly related to the decision maker's preferences and may change for different individuals. In particular, people underestimate the utility of uncertain scenarios compared to outcomes obtained with certainty, and show inconsistent preferences when the same choice is presented in different forms. The goal of this paper is precisely to analyze a real case study involving a decision problem regarding the Streicker Bridge, a pedestrian bridge on the Princeton University campus. By modelling the manager of the bridge with EUT first, and with PT later, we want to verify the differences between the two approaches and to investigate how the two models are sensitive to unpacking probabilities, which represent a common cognitive bias in irrational behaviors.
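The gap between EUT and PT evaluations can be made concrete with the Tversky-Kahneman functional forms: a power value function with loss aversion, and an inverse-S probability-weighting function. The parameter values below (alpha = 0.88, lambda = 2.25, gamma = 0.61) are the commonly cited 1992 estimates, used here purely as an illustration, not as values fitted in this study:

```python
def tk_weight(p, gamma=0.61):
    """Inverse-S probability weighting: moderate and large probabilities
    are underweighted, small ones overweighted."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def pt_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory value of a simple gamble: power value function with
    loss aversion, decision weights in place of probabilities."""
    total = 0.0
    for x, p in zip(outcomes, probs):
        v = x**alpha if x >= 0 else -lam * (-x) ** alpha
        total += tk_weight(p, gamma) * v
    return total

def expected_utility(outcomes, probs, alpha=0.88):
    """EUT benchmark with the same curvature but no weighting or loss aversion."""
    return sum(p * (x**alpha if x >= 0 else -((-x) ** alpha))
               for x, p in zip(outcomes, probs))
```

With these parameters a 50% chance of a gain receives a decision weight of about 0.42, so PT values such a gamble below its EUT counterpart, which is one expression of the certainty effect mentioned above.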
The predictive validity of prospect theory versus expected utility in health utility measurement.
Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis
2009-12-01
Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
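The clustering step this algorithm relies on can be sketched with a minimal EM loop for a spherical Gaussian mixture over the two-dimensional rule space. This is a generic EM-for-GMM sketch (spherical covariances, deterministic farthest-point seeding), not the HTEMC implementation itself:

```python
import numpy as np

def em_gmm(X, k, iters=50):
    """EM for a k-component spherical Gaussian mixture in d dimensions.
    Returns mixture weights, means, per-component variances, and hard labels."""
    n, d = X.shape
    # Deterministic farthest-point seeding of the means.
    mu = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[d2.argmax()])
    mu = np.array(mu, dtype=float)
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities, computed in log space for stability.
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        log_r = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweight, recentre, and update the spherical variances.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = np.maximum((r * sq).sum(axis=0) / (d * nk), 1e-9)
    return pi, mu, var, r.argmax(axis=1)
```

In the paper's setting the resulting clusters would then seed the per-cluster hierarchical tries; here the sketch only shows how EM groups points by aggregate characteristics.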
Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian
2018-02-01
Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
Demand for private health insurance: how important is the quality gap?
Costa, Joan; García, Jaume
2003-07-01
Perceived quality of private and public health care, income and insurance premium are among the determinants of demand for private health insurance (PHI). In the context of a model in which individuals are expected utility maximizers, the non-purchasing choice can result in consuming either public health care or private health care with full cost paid out-of-pocket. This paper empirically analyses the effect of the determinants of the demand for PHI on the probability of purchasing PHI by estimating a pseudo-structural model to deal with missing data and endogeneity issues. Our findings support the hypothesis that the demand for PHI is indeed driven by the quality gap between private and public health care. As expected, PHI is a normal good and a rise in the insurance premium reduces the probability of purchasing PHI, albeit displaying price elasticities smaller than one in absolute value for different groups of individuals. Copyright 2002 John Wiley & Sons, Ltd.
Blood detection in wireless capsule endoscopy using expectation maximization clustering
NASA Astrophysics Data System (ADS)
Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.
2006-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize up to the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
Michael R. Vanderberg; Kevin Boston; John Bailey
2011-01-01
Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...
Planning Routes Across Economic Terrains: Maximizing Utility, Following Heuristics
Zhang, Hang; Maddula, Soumya V.; Maloney, Laurence T.
2010-01-01
We designed an economic task to investigate human planning of routes in landscapes where travel in different kinds of terrain incurs different costs. Participants moved their finger across a touch screen from a starting point to a destination. The screen was divided into distinct kinds of terrain, and travel within each kind of terrain imposed a cost proportional to distance traveled. We varied costs and spatial configurations of terrains, and participants received fixed bonuses minus the total cost of the routes they chose. We first compared performance to a model maximizing gain. All but one of 12 participants failed to adopt least-cost routes, and their failure to do so reduced their winnings by about 30% (median value). We tested in detail whether participants' choices of routes satisfied three necessary conditions (heuristics) for a route to maximize gain. We report failures of one heuristic for 7 out of 12 participants. Last of all, we modeled human performance with the assumption that participants assign subjective utilities to costs and maximize utility. For 7 out of 12 participants, the fitted utility function was an accelerating power function of actual cost; for the remaining 5, a decelerating power function. We discuss connections between utility aggregation in route planning and decision under risk. Our task could be adapted to investigate human strategy and optimality of route planning in full-scale landscapes. PMID:21833269
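The distinction between objective cost and a power-distorted subjective cost can be sketched directly. In this toy model each route is a list of (distance, per-unit terrain cost) segments, and the subjective cost applies a power function to each segment's cost; the per-segment aggregation and the exponent are illustrative assumptions, not the paper's fitted model:

```python
def route_cost(segments):
    """Objective cost: sum over segments of distance times per-unit terrain cost."""
    return sum(dist * unit_cost for dist, unit_cost in segments)

def subjective_cost(segments, rho):
    """Power-distorted cost, u(c) = c**rho per segment; rho > 1 is an
    accelerating distortion, rho < 1 a decelerating one."""
    return sum((dist * unit_cost) ** rho for dist, unit_cost in segments)
```

An accelerating distortion can flip a preference: a 10-unit route through cheap terrain objectively beats a 12-unit two-terrain detour, yet with rho = 2 the detour's two moderate segment costs look subjectively cheaper than one large cost.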
NASA Technical Reports Server (NTRS)
Sutliff, Thomas J.; Otero, Angel M.; Urban, David L.
2002-01-01
The Physical Sciences Research Program of NASA sponsors a broad suite of peer-reviewed research investigating fundamental combustion phenomena and applied combustion research topics. This research is performed through both ground-based and on-orbit research capabilities. The International Space Station (ISS) and two facilities, the Combustion Integrated Rack and the Microgravity Science Glovebox, are key elements in the execution of microgravity combustion flight research planned for the foreseeable future. This paper reviews the Microgravity Combustion Science research planned for the International Space Station implemented from 2003 through 2012. Examples of selected research topics, expected outcomes, and potential benefits will be provided. This paper also summarizes a multi-user hardware development approach, recapping the progress made in preparing these research hardware systems. Within the description of this approach, an operational strategy is presented that illustrates how utilization of constrained ISS resources may be maximized dynamically to increase science through design decisions made during hardware development.
Toward a Responsibility-Catering Prioritarian Ethical Theory of Risk.
Wikman-Svahn, Per; Lindblom, Lars
2018-03-05
Standard tools used in societal risk management such as probabilistic risk analysis or cost-benefit analysis typically define risks in terms of only probabilities and consequences and assume a utilitarian approach to ethics that aims to maximize expected utility. The philosopher Carl F. Cranor has argued against this view by devising a list of plausible aspects of the acceptability of risks that points towards a non-consequentialist ethical theory of societal risk management. This paper revisits Cranor's list to argue that the alternative ethical theory responsibility-catering prioritarianism can accommodate the aspects identified by Cranor and that the elements in the list can be used to inform the details of how to view risks within this theory. An approach towards operationalizing the theory is proposed based on a prioritarian social welfare function that operates on responsibility-adjusted utilities. A responsibility-catering prioritarian ethical approach towards managing risks is a promising alternative to standard tools such as cost-benefit analysis.
Planning the FUSE Mission Using the SOVA Algorithm
NASA Technical Reports Server (NTRS)
Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly
2011-01-01
Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude- control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
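The graph-search core mentioned above is the textbook A* algorithm: expand the frontier node with the smallest cost-so-far plus heuristic, and stop at the goal. The sketch below is generic shortest-path A* over an abstract neighbor function; in SOVA, expected-value and attainability scoring would play the role of these simple edge costs:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Textbook A* shortest path. `neighbors(node)` yields (next_node,
    step_cost) pairs; `heuristic` must not overestimate the remaining cost."""
    open_heap = [(heuristic(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        f, cost, node = heapq.heappop(open_heap)
        if cost > g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], cost
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < g.get(nxt, float("inf")):
                g[nxt] = new_cost
                parent[nxt] = node
                heapq.heappush(open_heap, (new_cost + heuristic(nxt), new_cost, nxt))
    return None, float("inf")
```

With an admissible heuristic the first time the goal is expanded its cost is optimal, which is what makes A* attractive for assembling a schedule that maximizes an expected return value.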
A decision theoretical approach for diffusion promotion
NASA Astrophysics Data System (ADS)
Ding, Fei; Liu, Yun
2009-09-01
In order to maximize cost efficiency from scarce marketing resources, marketers are facing the problem of which group of consumers to target for promotions. We propose to use a decision theoretical approach to model this strategic situation. According to one promotion model that we develop, marketers balance between probabilities of successful persuasion and the expected profits on a diffusion scale, before making their decisions. In the other promotion model, the cost for identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and provide discussions about the best strategy. Our results show that efforts for targeting influentials or easily influenced people might be redundant under some conditions.
Towards a theory of tiered testing.
Hansson, Sven Ove; Rudén, Christina
2007-06-01
Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required for instance in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task
Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.
2012-01-01
We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstructed images with weak artefacts were attempted using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
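The core MLEM iteration used in such reconstructions is a multiplicative fixed-point update for the Poisson likelihood: forward-project the current image, compare with the measured data, and back-project the ratio. The sketch below uses a dense toy system matrix in place of a real CT projector:

```python
import numpy as np

def mlem(A, y, iters=50):
    """Maximum likelihood-expectation maximization for y ~ Poisson(A @ x):
    x <- x * A^T(y / (A @ x)) / A^T 1, elementwise, starting from ones."""
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity image A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ x, 1e-12)      # forward projection
        x *= (A.T @ (y / proj)) / sens       # back-project the data/model ratio
    return x
```

On consistent noiseless data the iteration converges to the exact solution; ordered subsets accelerate it by applying the same update to subsets of the projection data in turn, which is the speedup the abstract reports.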
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
Landheer, Karl; Johns, Paul C
2012-09-01
Traditional projection x-ray imaging utilizes only the information from the primary photons. Low-angle coherent scatter images can be acquired simultaneous to the primary images and provide additional information. In medical applications scatter imaging can improve x-ray contrast or reduce dose using information that is currently discarded in radiological images to augment the transmitted radiation information. Other applications include non-destructive testing and security. A system at the Canadian Light Source synchrotron was configured which utilizes multiple pencil beams (up to five) to create both primary and coherent scatter projection images, simultaneously. The sample was scanned through the beams using an automated step-and-shoot setup. Pixels were acquired in a hexagonal lattice to maximize packing efficiency. The typical pitch was between 1.0 and 1.6 mm. A Maximum Likelihood-Expectation Maximization-based iterative method was used to disentangle the overlapping information from the flat panel digital x-ray detector. The pixel value of the coherent scatter image was generated by integrating the radial profile (scatter intensity versus scattering angle) over an angular range. Different angular ranges maximize the contrast between different materials of interest. A five-beam primary and scatter image set (which had a pixel beam time of 990 ms and total scan time of 56 min) of a porcine phantom is included. For comparison a single-beam coherent scatter image of the same phantom is included. The muscle-fat contrast was 0.10 ± 0.01 and 1.16 ± 0.03 for the five-beam primary and scatter images, respectively. The air kerma was measured free in air using aluminum oxide optically stimulated luminescent dosimeters. The total area-averaged air kerma for the scan was measured to be 7.2 ± 0.4 cGy although due to difficulties in small-beam dosimetry this number could be inaccurate.
Husak, Jerry F; Fox, Stanley F
2006-09-01
To understand how selection acts on performance capacity, the ecological role of the performance trait being measured must be determined. Knowing if and when an animal uses maximal performance capacity may give insight into what specific selective pressures may be acting on performance, because individuals are expected to use close to maximal capacity only in contexts important to survival or reproductive success. Furthermore, if an ecological context is important, poor performers are expected to compensate behaviorally. To understand the relative roles of natural and sexual selection on maximal sprint speed capacity we measured maximal sprint speed of collared lizards (Crotaphytus collaris) in the laboratory and field-realized sprint speed for the same individuals in three different contexts (foraging, escaping a predator, and responding to a rival intruder). Females used closer to maximal speed while escaping predators than in the other contexts. Adult males, on the other hand, used closer to maximal speed while responding to an unfamiliar male intruder tethered within their territory. Sprint speeds during foraging attempts were far below maximal capacity for all lizards. Yearlings appeared to compensate for having lower absolute maximal capacity by using a greater percentage of their maximal capacity while foraging and escaping predators than did adults of either sex. We also found evidence for compensation within age and sex classes, where slower individuals used a greater percentage of their maximal capacity than faster individuals. However, this was true only while foraging and escaping predators and not while responding to a rival. Collared lizards appeared to choose microhabitats near refugia such that maximal speed was not necessary to escape predators. Although natural selection for predator avoidance cannot be ruled out as a selective force acting on locomotor performance in collared lizards, intrasexual selection for territory maintenance may be more important for territorial males.
Confronting Diversity in the Community College Classroom: Six Maxims for Good Teaching.
ERIC Educational Resources Information Center
Gillett-Karam, Rosemary
1992-01-01
Emphasizes the leadership role of community college faculty in developing critical teaching strategies focusing attention on the needs of women and minorities. Describes six maxims of teaching excellence: engaging students' desire to learn, increasing opportunities, eliminating obstacles, empowering students through high expectations, offering…
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
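The reward-information trade-off has a generic self-consistent solution: a softmax policy tilted by the action marginal, with the marginal updated to match the policy it induces (a Blahut-Arimoto-style iteration). The sketch below optimizes E[U] - (1/beta) I(S;A) for a uniform state distribution; it illustrates the objective class, not the paper's synaptic update rule:

```python
import numpy as np

def bounded_rational_policy(U, beta, iters=200):
    """Self-consistent bounded-rational policy maximizing E[U] - (1/beta) I(S;A)
    for a uniform state distribution; U[s, a] is the utility of action a in
    state s. Large beta approaches pure maximization, beta = 0 ignores U."""
    n_s, n_a = U.shape
    p_a = np.full(n_a, 1.0 / n_a)  # prior/marginal over actions
    policy = np.full((n_s, n_a), 1.0 / n_a)
    for _ in range(iters):
        # Policy step: softmax of utility, tilted by the action marginal.
        w = p_a[None, :] * np.exp(beta * U)
        policy = w / w.sum(axis=1, keepdims=True)
        # Marginal step: match the action distribution the policy induces.
        p_a = policy.mean(axis=0)
    return policy, p_a
```

The penalization of deviations between instantaneous and average firing behavior in the paper plays the same role as the marginal p_a here: it anchors the conditional output to its average.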
Hartmann, Klaas; Steel, Mike
2006-08-01
The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions using this framework is a computationally difficult problem to which a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated and one variation is suggested that can also be solved using a greedy algorithm.
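For intuition, the greedy allocation is easy to state in the simplest additive setting. The sketch below assumes a star phylogeny (so expected diversity decomposes as a sum of branch length times survival probability) and equal unit costs, in the spirit of the paper's first restricted scenario; the tuple format is a made-up illustration:

```python
def greedy_star_nap(taxa, budget):
    """Greedy Noah's Ark allocation for the special case of a star phylogeny,
    where expected diversity is additive: sum_i b_i * p_i. Each taxon is a
    tuple (branch_length, p0, p_max, cost); funding a taxon raises its
    survival probability from p0 to p_max. With equal costs, ranking by
    marginal gain is optimal for this additive special case."""
    gain = lambda i: taxa[i][0] * (taxa[i][2] - taxa[i][1])
    order = sorted(range(len(taxa)), key=gain, reverse=True)
    funded, spent = [], 0
    for i in order:
        cost = taxa[i][3]
        if spent + cost <= budget:
            funded.append(i)
            spent += cost
    return sorted(funded)
```

On a general phylogeny the marginal gains are not independent (shared branches survive if any descendant does), which is exactly why greedy algorithms fail for the general NAP.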
Pittig, Andre; van den Berg, Linda; Vervliet, Bram
2016-01-01
Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.
A case at last for age-phased reduction in equity.
Samuelson, P A
1989-01-01
Maximizing expected utility over a lifetime leads one who has constant relative risk aversion and faces random-walk securities returns to be "myopic" and hold the same fraction of portfolio in equities early and late in life--a defiance of folk wisdom and casual introspection. By assuming one needs to assure at retirement a minimum ("subsistence") level of wealth, the present analysis deduces a pattern of greater risk-taking when young than when old. When a subsistence minimum is needed at every period of life, the rentier paradoxically is least risk tolerant in youth--the Robert C. Merton paradox that traces to the decline with age of the present discounted value of the subsistence-consumption requirements. Conversely, the decline with age of capitalized human capital reverses the Merton effect. PMID:2813438
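The myopia result can be checked numerically: with constant relative risk aversion and i.i.d. returns, the expected-utility-maximizing equity fraction does not depend on wealth. The two-state return model and risk-free rate below are illustrative assumptions, not Samuelson's setup:

```python
import numpy as np

def optimal_equity_fraction(wealth, gamma, returns, probs, grid=101):
    """Grid search for the one-period equity fraction maximizing expected
    CRRA utility E[W'**(1 - gamma) / (1 - gamma)] under a discrete gross
    return model; rf is an illustrative gross risk-free return."""
    rf = 1.02
    best, best_eu = 0.0, -np.inf
    for f in np.linspace(0.0, 1.0, grid):
        w_next = wealth * ((1 - f) * rf + f * np.asarray(returns))
        eu = np.dot(probs, w_next ** (1 - gamma) / (1 - gamma))
        if eu > best_eu:
            best, best_eu = f, eu
    return best
```

Because CRRA utility scales by a positive constant when wealth is rescaled, the maximizing fraction is the same at every wealth level, while raising gamma lowers the equity share.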
Vanness, David J
2003-09-01
This paper estimates a fully structural unitary household model of employment and health insurance decisions for dual wage-earner families with children in the United States, using data from the 1987 National Medical Expenditure Survey. Families choose hours of work and the breakdown of compensation between cash wages and health insurance benefits for each wage earner in order to maximize expected utility under uncertain need for medical care. Heterogeneous demand for the employer-sponsored health insurance is thus generated directly from variations in health status and earning potential. The paper concludes by discussing the benefits of using structural models for simulating welfare effects of insurance reform relative to the costly assumptions that must be imposed for identification. Copyright 2003 John Wiley & Sons, Ltd.
Why cognitive science needs philosophy and vice versa.
Thagard, Paul
2009-04-01
Contrary to common views that philosophy is extraneous to cognitive science, this paper argues that philosophy has a crucial role to play in cognitive science with respect to generality and normativity. General questions include the nature of theories and explanations, the role of computer simulation in cognitive theorizing, and the relations among the different fields of cognitive science. Normative questions include whether human thinking should be Bayesian, whether decision making should maximize expected utility, and how norms should be established. These kinds of general and normative questions make philosophical reflection an important part of progress in cognitive science. Philosophy operates best, however, not with a priori reasoning or conceptual analysis, but rather with empirically informed reflection on a wide range of findings in cognitive science. Copyright © 2009 Cognitive Science Society, Inc.
Tomasik, M
1982-01-01
Glucose utilization by the erythrocytes, lactic acid concentration in the blood and erythrocytes, and haematocrit value were determined before exercise and during one hour of rest following maximal exercise in 97 individuals of either sex differing in physical efficiency. In the investigations reported by the author, individuals with strikingly high physical fitness performed maximal work one-third greater than that performed by individuals with medium fitness. The serum concentration of lactic acid remained above the resting value in all individuals even after 60 minutes of rest. In the erythrocytes, by contrast, this concentration returned to the normal level, but only in individuals with strikingly high efficiency. Glucose utilization by the erythrocytes during the restitution period was highest immediately after the exercise in all studied individuals and tended to return more rapidly to resting values, again in individuals with the highest efficiency. Repeated investigation of highly efficient individuals demonstrated greater utilization of glucose by the erythrocytes at the time of greater maximal exercise. This was associated with greater lactic acid concentration in the serum and erythrocytes throughout the whole one-hour rest period. The observed facts suggest an active participation of erythrocytes in the process of adaptation of the organism to exercise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both the number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
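The E-step/M-step alternation that AECM and APECM accelerate can be illustrated on a toy problem. The sketch below fits a plain two-component 1D Gaussian mixture by standard EM; it is a simplified stand-in for, not an implementation of, the Gaussian autoregressive regression mixture in the abstract.

```python
import math
import random

def em_gaussian_mixture(xs, iters=50):
    # Crude initialization: place the two means at the data extremes.
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: closed-form updates of weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(200)]
      + [random.gauss(5.0, 1.0) for _ in range(200)])
mu, var, pi = em_gaussian_mixture(xs)  # means recovered near 0 and 5
```

AECM/APECM-style accelerations reorganize the M-step into conditional maximizations over parameter subsets; the alternation shown here is the baseline they speed up.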
Bramley, Neil R; Lagnado, David A; Speekenbrink, Maarten
2015-05-01
Interacting with a system is key to uncovering its causal structure. A computational framework for interventional causal learning has been developed over the last decade, but how real causal learners might achieve or approximate the computations entailed by this framework is still poorly understood. Here we describe an interactive computer task in which participants were incentivized to learn the structure of probabilistic causal systems through free selection of multiple interventions. We develop models of participants' intervention choices and online structure judgments, using expected utility gain, probability gain, and information gain and introducing plausible memory and processing constraints. We find that successful participants are best described by a model that acts to maximize information (rather than expected score or probability of being correct); that forgets much of the evidence received in earlier trials; but that mitigates this by being conservative, preferring structures consistent with earlier stated beliefs. We explore 2 heuristics that partly explain how participants might be approximating these models without explicitly representing or updating a hypothesis space. (c) 2015 APA, all rights reserved.
Taksler, Glen B; Perzynski, Adam T; Kattan, Michael W
2017-04-01
Recommendations for colorectal cancer screening encourage patients to choose among various screening methods based on individual preferences for benefits, risks, screening frequency, and discomfort. We devised a model to illustrate how individuals with varying tolerance for screening complications risk might decide on their preferred screening strategy. We developed a discrete-time Markov mathematical model that allowed hypothetical individuals to maximize expected lifetime utility by selecting screening method, start age, stop age, and frequency. Individuals could choose from stool-based testing every 1 to 3 years, flexible sigmoidoscopy every 1 to 20 years with annual stool-based testing, colonoscopy every 1 to 20 years, or no screening. We compared the life expectancy gained from the chosen strategy with the life expectancy available from a benchmark strategy of decennial colonoscopy. For an individual at average risk of colorectal cancer who was risk neutral with respect to screening complications (and therefore was willing to undergo screening if it would actuarially increase life expectancy), the model predicted that he or she would choose colonoscopy every 10 years, from age 53 to 73 years, consistent with national guidelines. For a similar individual who was moderately averse to screening complications risk (and therefore required a greater increase in life expectancy to accept potential risks of colonoscopy), the model predicted that he or she would prefer flexible sigmoidoscopy every 12 years with annual stool-based testing, with 93% of the life expectancy benefit of decennial colonoscopy. For an individual with higher risk aversion, the model predicted that he or she would prefer 2 lifetime flexible sigmoidoscopies, 20 years apart, with 70% of the life expectancy benefit of decennial colonoscopy. 
Mathematical models may formalize how individuals with different risk attitudes choose between various guideline-recommended colorectal cancer screening strategies.
Equilibrium in a Production Economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiarolla, Maria B., E-mail: maria.chiarolla@uniroma1.it; Haussmann, Ulrich G., E-mail: uhaus@math.ubc.ca
2011-06-15
Consider a closed production-consumption economy with multiple agents and multiple resources. The resources are used to produce the consumption good. The agents derive utility from holding resources as well as consuming the good produced. They aim to maximize their utility while the manager of the production facility aims to maximize profits. With the aid of a representative agent (who has a multivariable utility function) it is shown that an Arrow-Debreu equilibrium exists. In so doing we establish technical results that will be used to solve the stochastic dynamic problem (a case with infinite dimensional commodity space so the General Equilibrium Theory does not apply) elsewhere.
NASA Astrophysics Data System (ADS)
Lillo, F.
2007-02-01
I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight into the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.
AHP for Risk Management Based on Expected Utility Theory
NASA Astrophysics Data System (ADS)
Azuma, Rumiko; Miyagi, Hayao
This paper presents a model of decision-making that incorporates risk assessment. The conventional evaluation in AHP can be considered a kind of utility. When dealing with risk, however, it is necessary to consider the probability of damage. In order to incorporate risk into the decision-making problem, we construct an AHP based on expected utility. The risk is treated as an element related to a criterion rather than as a criterion itself. The expected utility is integrated by treating satisfaction as positive utility and damage caused by risk as negative utility. Then, evaluation in AHP is executed using the expected utility.
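The integration step described above can be sketched as follows. The `expected_utility` form, the alternative names, and all numbers are invented for illustration; the paper's actual AHP weighting is richer than this.

```python
def expected_utility(satisfaction, damage, p_damage):
    # Satisfaction counts as positive utility; damage caused by the risk,
    # weighted by its probability of occurring, counts as negative utility.
    return (1 - p_damage) * satisfaction - p_damage * damage

# Hypothetical alternatives: (satisfaction, damage, probability of damage)
alternatives = {"A": (0.8, 0.9, 0.10), "B": (0.6, 0.2, 0.05)}
scores = {name: expected_utility(*v) for name, v in alternatives.items()}
best = max(scores, key=scores.get)  # "A": 0.63 vs 0.56
```

In the AHP setting these scores would then feed the pairwise comparison and aggregation over criteria.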
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-01-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations.
Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
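Post-reconstruction, the sPatlak analysis mentioned above reduces to a linear regression per voxel or region: plotting C_T(t)/C_p(t) against the normalized integral of the plasma input gives a line whose slope estimates the influx rate Ki. A minimal sketch under the standard Patlak assumptions (irreversible uptake after equilibration), using synthetic noise-free data rather than anything from the paper:

```python
def patlak_ki(times, c_tissue, c_plasma):
    # Cumulative trapezoidal integral of the plasma input function
    integral, cum = [0.0], 0.0
    for i in range(1, len(times)):
        cum += 0.5 * (c_plasma[i] + c_plasma[i - 1]) * (times[i] - times[i - 1])
        integral.append(cum)
    # Patlak coordinates: y = C_T/C_p versus x = (integral of C_p)/C_p
    xs = [integral[i] / c_plasma[i] for i in range(1, len(times))]
    ys = [c_tissue[i] / c_plasma[i] for i in range(1, len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope = influx rate constant Ki
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic check: constant input, tissue follows Ki*integral(Cp) + V*Cp
# with Ki = 0.05 and V = 0.3
times = [float(t) for t in range(1, 11)]
cp = [1.0] * 10
ct = [0.05 * t + 0.3 for t in times]
ki = patlak_ki(times, ct, cp)  # slope recovers Ki = 0.05
```

The 4D framework in the abstract instead estimates these parameters directly from raw projection data, which is what yields the reported noise reduction.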
Modeling cross-border care in the EU using a principal-agent framework.
Crivelli, L; Zweifel, P
1998-01-01
Cross-border care is likely to become a major issue among EU countries because patients have the option of obtaining treatment abroad under Community Regulation 1408/71. This paper develops a model formalizing both the patient's decision to apply for cross-border care and the authorizing physician's decision to admit a patient to the program. The patient is assumed to maximize expected utility, which depends on the quality of care and the length of waiting in the home country and the host country, respectively. Not all patients qualifying for the EU program present themselves to the authorizing physician because of the transaction cost involved. The physician in her turn shapes effective demand for authorization through her rate of refusal, which constitutes information to potential applicants about the probability of obtaining treatment abroad. The authorizing physician thus acts as an agent serving two principals, her patient and her national government, trading off the perceived utility loss of patients who are rejected against her commitment to domestic health policy. The model may be used to explain existing patient flows between EU countries.
Brand, Samuel P C; Keeling, Matt J
2017-03-01
It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).
On the role of budget sufficiency, cost efficiency, and uncertainty in species management
van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.
2014-01-01
Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
Computational medicinal chemistry in fragment-based drug discovery: what, how and when.
Rabal, Obdulia; Urbano-Cuadrado, Manuel; Oyarzabal, Julen
2011-01-01
The use of fragment-based drug discovery (FBDD) has increased in the last decade due to the encouraging results obtained to date. In this scenario, computational approaches, together with experimental information, play an important role to guide and speed up the process. By default, FBDD is generally considered as a constructive approach. However, such additive behavior is not always present, therefore, simple fragment maturation will not always deliver the expected results. In this review, computational approaches utilized in FBDD are reported together with real case studies, where applicability domains are exemplified, in order to analyze them, and then, maximize their performance and reliability. Thus, a proper use of these computational tools can minimize misleading conclusions, keeping the credit on FBDD strategy, as well as achieve higher impact in the drug-discovery process. FBDD goes one step beyond a simple constructive approach. A broad set of computational tools: docking, R group quantitative structure-activity relationship, fragmentation tools, fragments management tools, patents analysis and fragment-hopping, for example, can be utilized in FBDD, providing a clear positive impact if they are utilized in the proper scenario - what, how and when. An initial assessment of additive/non-additive behavior is a critical point to define the most convenient approach for fragments elaboration.
Expected Utility Distributions for Flexible, Contingent Execution
NASA Technical Reports Server (NTRS)
Bresina, John L.; Washington, Richard
2000-01-01
This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
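The branch-point rule described above can be sketched as follows; the option names, utility shapes, and execution windows are invented for illustration and are far simpler than the cached distributions the paper describes.

```python
def choose_branch(options, t):
    # Among options whose execution window contains t, pick the one whose
    # expected-utility distribution is highest at the current time.
    eligible = [(name, u(t)) for name, u, window in options
                if window[0] <= t <= window[1]]
    return max(eligible, key=lambda pair: pair[1])[0] if eligible else None

# Hypothetical branch: a time-sensitive action vs. a flat-utility fallback
options = [
    ("drive", lambda t: 10 - 0.5 * t, (0, 12)),  # utility decays with late start
    ("image", lambda t: 6.0, (0, 20)),           # flat utility, wider window
]
early = choose_branch(options, 2)   # "drive": 9.0 beats 6.0
late = choose_branch(options, 15)   # "image": the "drive" window has closed
```

Because the utility is stored as a function of start time, shifting execution windows changes only which point of the cached distribution is read, not the distribution itself.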
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM) is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood—expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
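For context, the point-valued ML-EM iteration that NIBEM generalizes to interval-valued projections can be sketched on a tiny system; the system matrix and data below are illustrative, not from the paper.

```python
def mlem(A, y, iters=100):
    # Standard ML-EM multiplicative update:
    # x_j <- x_j / s_j * sum_i A_ij * y_i / (A x)_i, with s_j = sum_i A_ij
    m, n = len(A), len(A[0])
    x = [1.0] * n  # strictly positive initial image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x

# Tiny consistent system with true activity x* = [2, 3]
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 3.0, 5.0]
x = mlem(A, y)  # converges toward [2, 3]
```

NIBEM replaces the single-valued forward projection `A x` with interval-valued projections, so each reconstructed activity becomes an interval whose width carries uncertainty information.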
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoudjehbaklou, H.; Puttgen, H.B.
This paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility strives to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, enforces the contractual and regulatory restrictions, and ensures the uniqueness of the optimum SPPF schedule.
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.
2004-01-01
The Collective Intelligence (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided world utility function concerning the entire collective to be maximized as well. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation also has benefits in scenarios where not all arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low opacity (analogous to having high signal to noise) but are not factored (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
The benefits of social influence in optimized cultural markets.
Abeliuk, Andrés; Berbeglia, Gerardo; Cebrian, Manuel; Van Hentenryck, Pascal
2015-01-01
Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation why experts routinely fail at predicting commercial success of cultural products. As a result, social influence is often presented in a negative light. Here, we show the benefits of social influence for cultural markets. We present a policy that uses product quality, appeal, position bias and social influence to maximize expected profits in the market. Our computational experiments show that our profit-maximizing policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms in expectation any policy not displaying social signals. Our results contrast with earlier work, which focused on the unpredictability and inequalities created by social influence. We show for the first time that, under our policy, dynamically showing consumers positive social signals increases the expected profit of the seller in cultural markets. We also show that, in reasonable settings, our profit-maximizing policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefit of the market.
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
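A simplified version of the threshold rebalancing idea (not the paper's optimal-threshold derivation) can be sketched as follows; the drift, volatility, cost rate, and band width are illustrative assumptions.

```python
import math
import random

def threshold_rebalance(prices, target=0.5, band=0.1, cost=0.002):
    # Two-asset market: one risky stock plus cash, starting at the target
    # stock fraction with unit wealth.
    shares = target / prices[0]
    cash = 1.0 - target
    wealth = 1.0
    for p in prices[1:]:
        stock_val = shares * p
        wealth = stock_val + cash
        frac = stock_val / wealth
        if abs(frac - target) > band:            # threshold breached: trade
            trade = stock_val - target * wealth  # stock dollars to sell (+) or buy (-)
            fee = cost * abs(trade)              # proportional transaction cost
            shares -= trade / p
            cash += trade - fee
            wealth -= fee
    return wealth

# Log-normal price path, matching the Black-Scholes setting the paper assumes
random.seed(1)
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] * math.exp(random.gauss(0.0002, 0.01)))
final = threshold_rebalance(prices)
# With an unreachably wide band the portfolio is never rebalanced (buy and hold):
buy_hold = threshold_rebalance(prices, band=10.0)
```

The paper's contribution is choosing the thresholds that maximize expected cumulative wealth; here they are fixed inputs.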
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fregosi, D.; Ravula, S.; Brhlik, D.
2015-04-22
Bosch has developed and demonstrated a novel DC microgrid system designed to maximize utilization efficiency for locally generated photovoltaic energy while offering high reliability, safety, redundancy, and reduced cost compared to equivalent AC systems. Several demonstration projects validating the system feasibility and expected efficiency gains have been completed and additional ones are in progress. This work gives an overview of the Bosch DC microgrid system and presents key results from a large simulation study done to estimate the energy savings of the Bosch DC microgrid over conventional AC systems. The study examined the system performance in locations across the United States for several commercial building types and operating profiles and found that the Bosch DC microgrid uses generated PV energy 6%–8% more efficiently than traditional AC systems.
Invisible hand effect in an evolutionary minority game model
NASA Astrophysics Data System (ADS)
Sysi-Aho, Marko; Saramäki, Jari; Kaski, Kimmo
2005-03-01
In this paper, we study the properties of a minority game with evolution realized by using genetic crossover to modify fixed-length decision-making strategies of agents. Although the agents in this evolutionary game act selfishly by trying to maximize their own performances only, it turns out that the whole society will eventually be rewarded optimally. This “invisible hand” effect is what Adam Smith, over two centuries ago, expected to take place in the context of the free market mechanism. However, this behaviour of the society of agents is realized only under idealized conditions, where all agents utilize the same efficient evolutionary mechanism. If, on the other hand, some of the agents are adaptive but not evolutionary, the system does not reach optimum performance; this is also the case if some of the evolutionary agents form a uniformly acting “cartel”.
Decision Making in Kidney Paired Donation Programs with Altruistic Donors*
Li, Yijiang; Song, Peter X.-K.; Leichtman, Alan B.; Rees, Michael A.; Kalbfleisch, John D.
2014-01-01
In recent years, kidney paired donation (KPD) has been extended to include living non-directed or altruistic donors, in which an altruistic donor donates to the candidate of an incompatible donor-candidate pair with the understanding that the donor in that pair will further donate to the candidate of a second pair, and so on; such a process continues and thus forms an altruistic donor-initiated chain. In this paper, we propose a novel strategy to sequentially allocate the altruistic donor (or bridge donor) so as to maximize the expected utility; analogous to the way a computer plays chess, the idea is to evaluate different allocations for each altruistic donor (or bridge donor) by looking several moves ahead in a derived look-ahead search tree. Simulation studies are provided to illustrate and evaluate our proposed method. PMID:25309603
An expected utility maximizer walks into a bar…
Burghart, Daniel R; Glimcher, Paul W; Lazzaro, Stephanie C
2013-06-01
We conducted field experiments at a bar to test whether blood alcohol concentration (BAC) correlates with violations of the generalized axiom of revealed preference (GARP) and the independence axiom. We found that individuals with BACs well above the legal limit for driving adhere to GARP and independence at rates similar to those who are sober. This finding led to the fielding of a third experiment to explore how risk preferences might vary as a function of BAC. We found gender-specific effects: Men did not exhibit variations in risk preferences across BACs. In contrast, women were more risk averse than men at low BACs but exhibited increasing tolerance towards risks as BAC increased. Based on our estimates, men and women's risk preferences are predicted to be identical at BACs nearly twice the legal limit for driving. We discuss the implications for policy-makers.
Asset Management for Water and Wastewater Utilities
Renewing and replacing the nation's public water infrastructure is an ongoing task. Asset management can help a utility maximize the value of its capital as well as its operations and maintenance dollars.
Ecological neighborhoods as a framework for umbrella species selection
Stuber, Erica F.; Fontaine, Joseph J.
2018-01-01
Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.
Power Converters Maximize Outputs Of Solar Cell Strings
NASA Technical Reports Server (NTRS)
Frederick, Martin E.; Jermakian, Joel B.
1993-01-01
Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
Thermoelectric Properties of SnS with Na-Doping.
Zhou, Binqiang; Li, Shuai; Li, Wen; Li, Juan; Zhang, Xinyue; Lin, Siqi; Chen, Zhiwei; Pei, Yanzhong
2017-10-04
Tin sulfide (SnS), a low-cost compound from the IV-VI semiconductors, has attracted particular attention due to its great potential for large-scale thermoelectric applications. However, pristine SnS shows a low carrier concentration, which leads to a low thermoelectric performance. In this work, sodium is utilized to substitute Sn to increase the hole concentration and consequently improve the thermoelectric power factor. The resultant Hall carrier concentration up to ∼10¹⁹ cm⁻³ is the highest reported so far for this compound. This further leads to the highest thermoelectric figure of merit, zT of 0.65, reported so far in polycrystalline SnS. The temperature-dependent Hall mobility shows a transition of the carrier-scattering source from a grain-boundary potential below 400 K to acoustic phonons at higher temperatures. The electronic transport properties can be well understood by a single parabolic band (SPB) model, enabling quantitative guidance for maximizing the thermoelectric power factor. Using the experimental lattice thermal conductivity, a maximal zT of 0.8 at 850 K is expected when the carrier concentration is further increased to ∼1 × 10²⁰ cm⁻³, according to the SPB model. This work not only demonstrates SnS as a promising low-cost thermoelectric material but also details the material parameters that fundamentally determine the thermoelectric properties.
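The dimensionless figure of merit quoted above, zT = S²σT/κ, is straightforward to evaluate. The numbers below are illustrative placeholders, not measured SnS values:

```python
def figure_of_merit(seebeck, conductivity, thermal_conductivity, temperature):
    """zT = S^2 * sigma * T / kappa (all SI units; result is dimensionless)."""
    return seebeck ** 2 * conductivity * temperature / thermal_conductivity

# illustrative inputs: S = 250 uV/K, sigma = 1.2e4 S/m, kappa = 0.8 W/(m K), T = 850 K
zT = figure_of_merit(250e-6, 1.2e4, 0.8, 850)  # ≈ 0.797
```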
Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.
ERIC Educational Resources Information Center
Ravizza, Kenneth; Rotella, Robert
Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…
Economics of Red Pine Management for Utility Pole Timber
Gerald H. Grossman; Karen Potter-Witter
1991-01-01
Including utility poles in red pine management regimes leads to distinctly different management recommendations. Where utility pole markets exist, managing for poles will maximize net returns. To do so, plantations should be maintained above 110 ft²/ac, higher than usually recommended. In Michigan's northern lower peninsula, approximately...
Text Classification for Intelligent Portfolio Management
2002-05-01
years including nearest neighbor classification [15], naive Bayes with EM (Expectation Maximization) [11] [13], Winnow with active learning [10... Active Learning and Expectation Maximization (EM). In particular, active learning is used to actively select documents for labeling, then EM assigns... generalization with active learning. Machine Learning, 15(2):201–221, 1994. [3] I. Dagan and P. Engelson. Committee-based sampling for training
Do violations of the axioms of expected utility theory threaten decision analysis?
Nease, R F
1996-01-01
Research demonstrates that people violate the independence principle of expected utility theory, raising the question of whether expected utility theory is normative for medical decision making. The author provides three arguments that violations of the independence principle are less problematic than they might first appear. First, the independence principle follows from other more fundamental axioms whose appeal may be more readily apparent than that of the independence principle. Second, the axioms need not be descriptive to be normative, and they need not be attractive to all decision makers for expected utility theory to be useful for some. Finally, by providing a metaphor of decision analysis as a conversation between the actual decision maker and a model decision maker, the author argues that expected utility theory need not be purely normative for decision analysis to be useful. In short, violations of the independence principle do not necessarily represent direct violations of the axioms of expected utility theory; behavioral violations of the axioms of expected utility theory do not necessarily imply that decision analysis is not normative; and full normativeness is not necessary for decision analysis to generate valuable insights.
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
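For a quadratic risk objective, the primal problem (risk minimization under budget and expected-return constraints) reduces to the familiar KKT linear system. A minimal pure-Python sketch with illustrative parameter values, not the replica-analysis machinery of the paper:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def min_risk_portfolio(cov, mean, budget, target_return):
    """Minimize w' cov w subject to 1'w = budget and mean'w = target_return,
    by solving the KKT system [2C 1 m; 1' 0 0; m' 0 0][w; l; u] = [0; budget; target]."""
    n = len(mean)
    A = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        for j in range(n):
            A[i][j] = 2 * cov[i][j]
        A[i][n] = 1.0
        A[i][n + 1] = mean[i]
        A[n][i] = 1.0
        A[n + 1][i] = mean[i]
    b = [0.0] * n + [budget, target_return]
    return solve(A, b)[:n]

cov = [[0.04, 0.01], [0.01, 0.09]]
mean = [0.08, 0.12]
w = min_risk_portfolio(cov, mean, 1.0, 0.10)  # two assets: constraints pin w
```

The dual problem of the paper swaps objective and constraint: maximize mean'w subject to budget and risk bounds.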
Partitioning-based mechanisms under personalized differential privacy.
Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian
2017-05-01
Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
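A toy version of the privacy-aware idea is to partition records by personal ε and noise each partition at its own level, rather than making everyone pay for the strictest budget. This is a simplified sketch under stated assumptions, not the paper's mechanisms:

```python
import math
import random
from collections import defaultdict

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def personalized_private_count(records, sensitivity=1.0, seed=0):
    """Partition records by their personal epsilon and release one
    Laplace-noised subcount per partition, so each record is protected
    at its own chosen level."""
    rng = random.Random(seed)
    partitions = defaultdict(int)
    for value, eps in records:  # records: (0/1 predicate value, personal epsilon)
        partitions[eps] += value
    return sum(count + laplace_noise(sensitivity / eps, rng)
               for eps, count in partitions.items())

# 50 permissive users (eps = 100) who all satisfy the predicate
result = personalized_private_count([(1, 100.0)] * 50)
```

With a large personal ε the added noise is small, so the released count stays close to the true value of 50.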
When Does Reward Maximization Lead to Matching Law?
Sakai, Yutaka; Fukai, Tomoki
2008-01-01
What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to animals' decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on subject's past choices. We may term this strategy of partial reward maximization “matching strategy”. Then, this strategy is applied to the case where the subject's decision system updates the information for making a decision. Such information includes subject's past actions or sensory stimuli, and the internal storage of this information is often called “state variables”. We demonstrate that the matching strategy provides an easy way to maximize reward when combined with the exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, yielding a novel insight into the relationship between maximizing and matching. PMID:19030101
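The distinction can be seen in a toy two-option task with fixed reward probabilities: a maximizer would pick the better option exclusively, while a partial-maximization ("matching") strategy that allocates choices in proportion to locally estimated returns converges toward the matching law. A minimal sketch with hypothetical parameters:

```python
import random

def melioration(p=(0.6, 0.2), rounds=20000, seed=7):
    """Matching strategy on a two-option task: choices are allocated in
    proportion to each option's locally estimated return, ignoring any
    effect of past choices on future outcomes."""
    rng = random.Random(seed)
    reward = [1.0, 1.0]  # optimistic prior keeps both estimates positive
    count = [1, 1]
    for _ in range(rounds):
        est = [reward[i] / count[i] for i in (0, 1)]
        p_choose0 = est[0] / (est[0] + est[1])
        a = 0 if rng.random() < p_choose0 else 1
        count[a] += 1
        if rng.random() < p[a]:
            reward[a] += 1.0
    return count[0] / (count[0] + count[1])

share = melioration()  # matching predicts 0.6 / (0.6 + 0.2) = 0.75
```

Note that exclusive choice of option 0 would earn more reward here, so matching and maximizing genuinely diverge in this setting.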
[Measures to reduce lighting-related energy use and costs at hospital nursing stations].
Su, Chiu-Ching; Chen, Chen-Hui; Chen, Shu-Hwa; Ping, Tsui-Chu
2011-06-01
Hospitals have long been expected to deliver medical services in an environment that is comfortable and bright. This expectation keeps hospital energy demand stubbornly high and energy costs spiraling due to escalating utility fees. Hospitals must identify appropriate strategies to control electricity usage in order to control operating costs effectively. This paper proposes several electricity saving measures that both support government policies aimed at reducing global warming and help reduce energy consumption at the authors' hospital. The authors held educational seminars, established a website teaching energy saving methods, maximized facility and equipment use effectiveness (e.g., adjusting lamp placements, power switch and computer saving modes), posted signs promoting electricity saving, and established a regularized energy saving review mechanism. After implementation, average nursing staff energy saving knowledge had risen from 71.8% to 100% and total nursing station electricity costs fell from NT$16,456 to NT$10,208 per month, representing an effective monthly savings of 37.9% (NT$6,248). This project demonstrated the ability of a program designed to slightly modify nursing staff behavior to achieve effective and meaningful results in reducing overall electricity use.
Using Classification Trees to Predict Alumni Giving for Higher Education
ERIC Educational Resources Information Center
Weerts, David J.; Ronca, Justin M.
2009-01-01
As the relative level of public support for higher education declines, colleges and universities aim to maximize alumni-giving to keep their programs competitive. Anchored in a utility maximization framework, this study employs the classification and regression tree methodology to examine characteristics of alumni donors and non-donors at a…
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.; Clancy, Daniel (Technical Monitor)
2001-01-01
The "Collective Intelligence" (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided "world" utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics, in particular scenarios where not all of the arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low "opacity (analogous to having high signal to noise) but are not "factored" (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.
Feng, Jianyuan; Feng, Zhiyong
2017-09-11
Network densification is attracting increasing attention recently due to its ability to improve network capacity by spatial reuse and relieve congestion by offloading. However, excessive densification and aggressive offloading can also cause the degradation of network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as functions of base station density using a Poisson Point Process (PPP) model and a Markov chain. These results reveal the rules governing deployment. Based on them, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for network densification.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of lungs of the patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to the patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters due to inaccurate selection of model parameters are overcome using EM algorithm. Numerical experiments using chest shaped geometry are carried out with proposed method and the performance is compared with extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
Triangulating the neural, psychological, and economic bases of guilt aversion
Chang, Luke J.; Smith, Alec; Dufwenberg, Martin; Sanfey, Alan G.
2011-01-01
Why do people often choose to cooperate when they can better serve their interests by acting selfishly? One potential mechanism is that the anticipation of guilt can motivate cooperative behavior. We utilize a formal model of this process in conjunction with fMRI to identify brain regions that mediate cooperative behavior while participants decided whether or not to honor a partner’s trust. We observed increased activation in the insula, supplementary motor area, dorsolateral prefrontal cortex (PFC), and temporal parietal junction when participants were behaving consistent with our model, and found increased activity in the ventromedial PFC, dorsomedial PFC, and nucleus accumbens when they chose to abuse trust and maximize their financial reward. This study demonstrates that a neural system previously implicated in expectation processing plays a critical role in assessing moral sentiments that in turn can sustain human cooperation in the face of temptation. PMID:21555080
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichara, Karim; Protopapas, Pavlos
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
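As one concrete instance of the expectation-maximization step used in such pipelines, here is a minimal two-component 1-D Gaussian-mixture EM in pure Python. It illustrates the EM idea only, not the paper's Bayesian-network learner, and all values are synthetic:

```python
import math
import random

def em_gmm(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    mu = [min(data), max(data)]  # initialize means at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

rng = random.Random(0)
data = ([rng.gauss(0, 1) for _ in range(300)]
        + [rng.gauss(5, 1) for _ in range(300)])
mu, var, pi = em_gmm(data)  # means converge near the true 0 and 5
```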
Sparse Bayesian learning for DOA estimation with mutual coupling.
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-10-16
Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Directing solar photons to sustainably meet food, energy, and water needs.
Gençer, Emre; Miskin, Caleb; Sun, Xingshu; Khan, M Ryyan; Bermel, Peter; Alam, M Ashraf; Agrawal, Rakesh
2017-06-09
As we approach a "Full Earth" of over ten billion people within the next century, unprecedented demands will be placed on food, energy and water (FEW) supplies. The grand challenge before us is to sustainably meet humanity's FEW needs using scarcer resources. To overcome this challenge, we propose the utilization of the entire solar spectrum by redirecting solar photons to maximize FEW production from a given land area. We present novel solar spectrum unbundling FEW systems (SUFEWS), which can meet FEW needs locally while reducing the overall environmental impact of meeting these needs. The ability to meet FEW needs locally is critical, as significant population growth is expected in less-developed areas of the world. The proposed system presents a solution to harness the same amount of solar products (crops, electricity, and purified water) that could otherwise require ~60% more land if SUFEWS were not used: a major step for Full Earth preparedness.
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications for radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for the dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method based on the analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
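The maximum-likelihood expectation-maximization reconstruction mentioned above has the classic multiplicative update x_j ← (x_j / Σ_i A_ij) Σ_i A_ij y_i / (Ax)_i. A tiny noiseless toy with a hypothetical 2×2 system matrix, not an actual RMC model:

```python
def mlem(A, y, iters=100):
    """Maximum-likelihood expectation-maximization for emission data:
    x_j <- (x_j / sum_i A_ij) * sum_i A_ij * y_i / (A x)_i."""
    n_meas, n_pix = len(A), len(A[0])
    x = [1.0] * n_pix  # uniform positive initial image
    sens = [sum(A[i][j] for i in range(n_meas)) for j in range(n_pix)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_meas)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_meas)]
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(n_meas)) / sens[j]
             for j in range(n_pix)]
    return x

# noiseless toy: 2 detector bins, 2 source pixels, slight cross-talk
A = [[1.0, 0.2], [0.2, 1.0]]
true_x = [3.0, 1.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x = mlem(A, y)  # recovers true_x on consistent, noiseless data
```

The update preserves positivity and, for consistent data, converges to the maximum-likelihood image.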
Public health care and private insurance demand: the waiting time as a link.
Jofre-Bonet, M
2000-01-01
This paper analyzes the effect of waiting times in the Spanish public health system on the demand for private health insurance. Expected utility maximization determines whether or not individuals buy a private health insurance. The decision depends not only on consumer's covariates such as income, socio-demographic characteristics and health status, but also on the quality of the treatment by the public provider. We interpret waiting time as a qualitative attribute of the health care provision. The empirical analysis uses the Spanish Health Survey of 1993. We cope with the absence of income data by using the Spanish Family Budget Survey of 1990-91 as a complementary data set, following the Arellano-Meghir method [4]. Results indicate that a reduction in the waiting time lowers the probability of buying private health insurance. This suggests the existence of a crowd-out in the health care provision market.
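The insurance-purchase decision described above reduces to an expected-utility comparison. A sketch with CRRA utility, where a shorter public waiting time is modeled simply as a smaller effective loss; all numbers are hypothetical:

```python
import math

def buys_insurance(wealth, premium, p_ill, loss, crra=2.0):
    """Buy private insurance iff EU(insured) > EU(uninsured) under CRRA utility."""
    def u(w):
        return math.log(w) if crra == 1 else w ** (1 - crra) / (1 - crra)
    eu_uninsured = p_ill * u(wealth - loss) + (1 - p_ill) * u(wealth)
    eu_insured = u(wealth - premium)
    return eu_insured > eu_uninsured

# long public waiting time: large effective loss, insurance worth its loading
long_wait = buys_insurance(wealth=100, premium=6.0, p_ill=0.1, loss=50)
# short waiting time: small effective loss, the same 20% loading no longer pays
short_wait = buys_insurance(wealth=100, premium=1.2, p_ill=0.1, loss=10)
```

Both premiums carry the same 20% loading over the expected loss, yet the risk-averse agent insures only against the larger (long-wait) loss, mirroring the crowd-out result.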
On the relative independence of thinking biases and cognitive ability.
Stanovich, Keith E; West, Richard F
2008-04-01
In 7 different studies, the authors observed that a large number of thinking biases are uncorrelated with cognitive ability. These thinking biases include some of the most classic and well-studied biases in the heuristics and biases literature, including the conjunction effect, framing effects, anchoring effects, outcome bias, base-rate neglect, "less is more" effects, affect biases, omission bias, myside bias, sunk-cost effect, and certainty effects that violate the axioms of expected utility theory. In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency. (c) 2008 APA, all rights reserved.
Optimal execution in high-frequency trading with Bayesian learning
NASA Astrophysics Data System (ADS)
Du, Bian; Zhu, Hongliang; Zhao, Jingdong
2016-11-01
We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model an inactive trading with no limit order in the market. The dealer simply holds dollars and shares of stocks until terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and in fact they are globally optimal. We also give numerical simulation to the value function and optimal quotes at the last part of the article.
Work Placement in UK Undergraduate Programmes. Student Expectations and Experiences.
ERIC Educational Resources Information Center
Leslie, David; Richardson, Anne
1999-01-01
A survey of 189 pre- and 106 post-sandwich work-experience students in tourism suggested that potential benefits were not being maximized. Students needed better preparation for the work experience, especially in terms of their expectations. The work experience needed better design, and the role of industry tutors needed clarification. (SK)
Career Preference among Universities' Faculty: Literature Review
ERIC Educational Resources Information Center
Alenzi, Faris Q.; Salem, Mohamed L.
2007-01-01
Why do people enter academic life? What are their expectations? How can they maximize their experience and achievements, both short- and long-term? How much should they move towards commercialization? What can they do to improve their career? How much autonomy can they reasonably expect? What are the key issues for academics and aspiring academics…
Picking battles wisely: plant behaviour under competition.
Novoplansky, Ariel
2009-06-01
Plants are limited in their ability to choose their neighbours, but they are able to orchestrate a wide spectrum of rational competitive behaviours that increase their prospects to prevail under various ecological settings. Through the perception of neighbours, plants are able to anticipate probable competitive interactions and modify their competitive behaviours to maximize their long-term gains. Specifically, plants can minimize competitive encounters by avoiding their neighbours; maximize their competitive effects by aggressively confronting their neighbours; or tolerate the competitive effects of their neighbours. However, the adaptive values of these non-mutually exclusive options are expected to depend strongly on the plants' evolutionary background and to change dynamically according to their past development, and relative sizes and vigour. Additionally, the magnitude of competitive responsiveness is expected to be positively correlated with the reliability of the environmental information regarding the expected competitive interactions and the expected time left for further plastic modifications. Concurrent competition over external and internal resources and morphogenetic signals may enable some plants to increase their efficiency and external competitive performance by discriminately allocating limited resources to their more promising organs at the expense of failing or less successful organs.
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, the nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as NESs do. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided that enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI mean to correct the state estimate, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
Potential Impact of Risk and Loss Aversion on the Process of Accepting Kidneys for Transplantation.
Heilman, Raymond L; Green, Ellen P; Reddy, Kunam S; Moss, Adyr; Kaplan, Bruce
2017-07-01
Behavioral economic theory suggests that people make decisions based on maximizing perceived value; however, this may be influenced more by the risk of loss rather than of potential gain. Additionally, individuals may seek certainty over uncertainty. These are termed loss aversion and risk aversion, respectively. Loss aversion is particularly sensitive to how the decision is "framed." Thus, labeling a kidney as high Kidney Donor Profile Index results in higher discard rates because this creates a nonlinearity in perceived risk. There is also evidence that the perceived loss due to regulatory sanction results in increased organ discard rates. This may be due to the overuse of terminology that stresses regulatory sanctions and thus perpetuates fear of loss through a form of nudging. Our goal is to point out how these concepts of behavioral economics may negatively influence the decision process to accept these suboptimal organs. We hope to make the community more aware of these powerful psychological influences and thus potentially increase the utilization of these suboptimal organs. Further, we would urge regulatory bodies to avoid utilizing strategies that frame outcomes in terms of loss due to flagging and build models that are less prone to uncertain expected versus observed outcomes.
Net reclassification index at event rate: properties and relationships.
Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B
2017-12-10
The net reclassification improvement (NRI) is an attractively simple summary measure quantifying improvement in performance because of addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
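The identity noted above, that the quantity underlying the NRI at event rate is the Kolmogorov-Smirnov distance between the distributions of predicted risk among events and non-events, can be computed empirically. A minimal sketch with toy risk scores (not the paper's data):

```python
# Empirical KS distance between predicted risks of events and non-events:
# the maximum over thresholds of |sensitivity - (1 - specificity)|.

def ks_distance(risks_events, risks_nonevents):
    """Max over thresholds t of |P(risk > t | event) - P(risk > t | non-event)|."""
    thresholds = sorted(set(risks_events) | set(risks_nonevents))
    best = 0.0
    for t in thresholds:
        tpr = sum(r > t for r in risks_events) / len(risks_events)
        fpr = sum(r > t for r in risks_nonevents) / len(risks_nonevents)
        best = max(best, abs(tpr - fpr))
    return best

events = [0.8, 0.6, 0.7, 0.4]       # hypothetical predicted risks, cases
nonevents = [0.2, 0.3, 0.5, 0.1]    # hypothetical predicted risks, controls
print(ks_distance(events, nonevents))
```

The threshold attaining the maximum is the one at which standardized net benefit peaks, which is how the KS view connects to the decision-analytic measures the abstract mentions.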
Utilization of community pharmacy space to enhance privacy: a qualitative study.
Hattingh, H Laetitia; Emmerton, Lynne; Ng Cheong Tin, Pascale; Green, Catherine
2016-10-01
Community pharmacists require access to consumers' information about their medicines and health-related conditions to make informed decisions regarding treatment options. Open communication between consumers and pharmacists is ideal although consumers are only likely to disclose relevant information if they feel that their privacy requirements are being acknowledged and adhered to. This study sets out to explore community pharmacy privacy practices, experiences and expectations and the utilization of available space to achieve privacy. Qualitative methods were used, comprising a series of face-to-face interviews with 25 pharmacists and 55 pharmacy customers in Perth, Western Australia, between June and August 2013. The use of private consultation areas for certain services and sensitive discussions was supported by pharmacists and consumers although there was recognition that workflow processes in some pharmacies may need to change to maximize the use of private areas. Pharmacy staff adopted various strategies to overcome privacy obstacles such as taking consumers to a quieter part of the pharmacy, avoiding exposure of sensitive items through packaging, lowering of voices, interacting during pharmacy quiet times and telephoning consumers. Pharmacy staff and consumers regularly had to apply judgement to achieve the required level of privacy. Management of privacy can be challenging in the community pharmacy environment, and on-going work in this area is important. As community pharmacy practice is increasingly becoming more involved in advanced medication and disease state management services with unique privacy requirements, pharmacies' layouts and systems to address privacy challenges require a proactive approach. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Oh, Pok-Ja; Kim, Il-Ok; Shin, Sung-Rae; Jung, Hoe-Kyung
2004-10-01
The aim of this study was to develop Web-based multimedia content for Physical Examination and Health Assessment. The multimedia content was developed based on Jung's teaching and learning structure plan model, using the following 5 processes: 1) Analysis Stage, 2) Planning Stage, 3) Storyboard Framing and Production Stage, 4) Program Operation Stage, and 5) Final Evaluation Stage. The web-based multimedia content consisted of an intro movie, a main page and sub pages. On the main page, there were 6 menu bars consisting of Announcement center, Information of professors, Lecture guide, Cyber lecture, Q&A, and Data centers, and a site map which introduced the 15 weekly lectures. In the operation of the web-based multimedia content, HTML, JavaScript, Flash, and multimedia technology (audio and video) were utilized, and the content consisted of text content, interactive content, animation, and audio & video. Consultation with experts in content, computer engineering, and educational technology was utilized in the development of these processes. Web-based multimedia content is expected to offer individualized and tailored learning opportunities that maximize and facilitate the effectiveness of the teaching and learning process. Therefore, multimedia content should be utilized concurrently with lectures in Physical Examination and Health Assessment classes as a vital teaching aid to make up for the weaknesses of the face-to-face teaching-learning method.
Operation of Power Grids with High Penetration of Wind Power
NASA Astrophysics Data System (ADS)
Al-Awami, Ali Taleb
The integration of wind power into the power grid poses many challenges due to its highly uncertain nature. This dissertation involves two main components related to the operation of power grids with high penetration of wind energy: wind-thermal stochastic dispatch and wind-thermal coordinated bidding in short-term electricity markets. In the first part, a stochastic dispatch (SD) algorithm is proposed that takes into account the stochastic nature of the wind power output. The uncertainty associated with wind power output given the forecast is characterized using conditional probability density functions (CPDF). Several functions are examined to characterize wind uncertainty including Beta, Weibull, Extreme Value, Generalized Extreme Value, and Mixed Gaussian distributions. The unique characteristics of the Mixed Gaussian distribution are then utilized to facilitate the speed of convergence of the SD algorithm. A case study is carried out to evaluate the effectiveness of the proposed algorithm. Then, the SD algorithm is extended to simultaneously optimize the system operating costs and emissions. A modified multi-objective particle swarm optimization algorithm is suggested to identify the Pareto-optimal solutions defined by the two conflicting objectives. A sensitivity analysis is carried out to study the effect of changing load level and imbalance cost factors on the Pareto front. In the second part of this dissertation, coordinated trading of wind and thermal energy is proposed to mitigate risks due to those uncertainties. The problem of wind-thermal coordinated trading is formulated as a mixed-integer stochastic linear program. The objective is to obtain the optimal tradeoff bidding strategy that maximizes the total expected profits while controlling trading risks. For risk control, a weighted term of the conditional value at risk (CVaR) is included in the objective function. 
The CVaR aims to maximize the expected profits of the least profitable scenarios, thus improving trading risk control. A case study comparing coordinated with uncoordinated bidding strategies depending on the trader's risk attitude is included. Simulation results show that coordinated bidding can improve the expected profits while significantly improving the CVaR.
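The risk measure used above can be stated compactly: the CVaR at level alpha is the mean profit over the worst alpha fraction of scenarios. The dissertation embeds a weighted CVaR term inside a stochastic program; the sketch below shows only the measure itself, on made-up scenario profits.

```python
import math

# Conditional value at risk: mean of the worst ceil(alpha * n) scenario profits.
# Maximizing a weighted sum of expected profit and CVaR trades return for the
# performance of the least profitable scenarios.

def cvar(profits, alpha):
    worst = sorted(profits)[: math.ceil(alpha * len(profits))]
    return sum(worst) / len(worst)

# Hypothetical trading profits across ten scenarios.
profits = [120.0, 80.0, -40.0, 60.0, -10.0, 95.0, 30.0, -75.0, 50.0, 10.0]
print(cvar(profits, 0.2))  # mean of the two worst scenarios: (-75 - 40) / 2
```

A more risk-averse trader weights this term more heavily in the objective, accepting a lower expected profit for a higher CVaR.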
ERIC Educational Resources Information Center
Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise
2012-01-01
This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…
Using Debate to Maximize Learning Potential: A Case Study
ERIC Educational Resources Information Center
Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda
2007-01-01
Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…
NASA Astrophysics Data System (ADS)
Kim, Yusung
Currently, there is great interest in integrating biological information into intensity-modulated radiotherapy (IMRT) treatment planning with the aim of boosting high-risk tumor subvolumes. Selective boosting of tumor subvolumes can be accomplished without violating normal tissue complication constraints using information from functional imaging. In this work we have developed a risk-adaptive optimization framework that utilizes a nonlinear biological objective function. Employing risk-adaptive radiotherapy for prostate cancer, it is possible to increase the equivalent uniform dose (EUD) by up to 35.4 Gy in tumor subvolumes having the highest risk classification without increasing normal tissue complications. Subsequently, we studied the impact of functional imaging accuracy and found, on the one hand, that loss in sensitivity had a large impact on expected local tumor control, which was maximal when a low-risk classification was chosen for the remaining low-risk PTV; on the other hand, loss in specificity appeared to have a minimal impact on normal tissue sparing. Therefore, it appears that in order to improve the therapeutic ratio, a functional imaging technique with high sensitivity, rather than high specificity, is needed. Finally, a comparison between selective-boosting and uniform-boosting IMRT strategies yielding the same EUD to the overall PTV showed that selective boosting considerably improves expected TCP, especially when lack of control of the high-risk tumor subvolumes is the cause of expected therapy failure. Furthermore, while selective boosting IMRT using physical dose-volume objectives yielded rectal and bladder sparing similar to its equivalent uniform-boosting IMRT plan, risk-adaptive radiotherapy utilizing biological objective functions yielded a 5.3% reduction in NTCP for the rectum.
Hence, in risk-adaptive radiotherapy the therapeutic ratio can be increased over that which can be achieved with conventional selective boosting IMRT using physical dose-volume objectives. In conclusion, a novel risk-adaptive radiotherapy strategy is proposed and promises increased expected local control for locoregionally advanced tumors with equivalent or better normal tissue sparing.
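The abstract quantifies dose escalation via the equivalent uniform dose (EUD). A common definition, assumed here but not confirmed by the abstract, is the generalized EUD with a tissue-specific parameter a (negative for tumors, positive for serial normal tissues); the dose-volume numbers below are hypothetical.

```python
# Generalized EUD sketch under the assumed definition:
#   EUD = (sum_i v_i * D_i**a) ** (1/a),
# with v_i the fractional volume receiving dose D_i.

def generalized_eud(doses, volumes, a):
    assert abs(sum(volumes) - 1.0) < 1e-9, "fractional volumes must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# Hypothetical dose-volume histogram: 70 Gy to 60% of the target, 60 Gy to 40%.
# For a tumor (a < 0), the EUD is pulled toward the colder region of the target.
print(generalized_eud([70.0, 60.0], [0.6, 0.4], a=-10))
```

With a = 1 the formula reduces to the mean dose (66 Gy here); the more negative a is, the more the cold 60 Gy region dominates, which is why underdosed subvolumes are so costly for tumor control.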
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
Skedgel, Chris; Wailoo, Allan; Akehurst, Ron
2015-01-01
Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Baum, R.; Characklis, G. W.
2016-12-01
Financial hedging solutions have been examined as tools for effectively mitigating water scarcity related financial risks for water utilities, and have become more prevalent as conservation (resulting in reduced revenues) and water transfers (resulting in increased costs) play larger roles in drought management. Individualized financial contracts (i.e. designed for a single utility) provide evidence of the potential benefits of financial hedging. However, individualized contracts require substantial time and information to develop, limiting their widespread implementation. More generalized contracts have also shown promise, and would allow the benefits of risk pooling to be more effectively realized, resulting in less expensive contracts. Risk pooling reduces the probability of an insurer making payouts that deviate significantly from the mean, but given that the financial risks of drought are spatially correlated amongst utilities, these more extreme "fat tail" risks remain. Any group offering these hedging contracts, whether a third-party insurer or a "mutual" comprised of many utilities, will need to balance the costs (i.e. additional risk) and benefits (i.e. returns) of alternative approaches to managing the extreme risks (e.g. through insurance layers). The balance of these different approaches will vary depending on the risk pool being considered, including the number, size and exposure of the participating utilities. This work first establishes a baseline of the tradeoffs between risk and expected return in insuring against the financial risks of water scarcity without alternative hedging approaches for water utilities across all climate divisions of the United States. Then various scenarios are analyzed to provide insight into how to maximize returns for risk pooling portfolios at various risk levels through balancing different insurance layers and hedging approaches. 
This analysis will provide valuable information for designing optimal financial risk management strategies for water utilities across the United States.
Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes
ERIC Educational Resources Information Center
Chen, Frederick H.
2010-01-01
The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
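A numerical companion to the graphical analysis: the expected utility and certainty equivalent of a three-outcome gamble. The log utility below is an illustrative risk-averse choice, not the article's specification.

```python
import math

# Expected utility of a gamble with more than two outcomes, plus the certainty
# equivalent (the sure amount with the same utility as the gamble).

def expected_utility(outcomes, probs, u=math.log):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def certainty_equivalent(outcomes, probs, u=math.log, u_inv=math.exp):
    return u_inv(expected_utility(outcomes, probs, u))

gamble = ([100.0, 50.0, 20.0], [0.3, 0.4, 0.3])  # hypothetical three-outcome gamble
eu = expected_utility(*gamble)
ce = certainty_equivalent(*gamble)
ev = sum(p * x for x, p in zip(*gamble))
print(eu, ce, ev)  # CE < EV for a risk-averse (concave) utility
```

The gap between the expected value and the certainty equivalent is the risk premium, the quantity the geometric construction makes visible for gambles with more than two outcomes.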
Optimal weight based on energy imbalance and utility maximization
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2016-01-01
This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that describes the weight gain resulting from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. By applying tools from optimal control and the qualitative theory of differential equations, we obtain several results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight. Depending on the values of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.
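The state equation described above can be sketched numerically. The energy-to-mass conversion factor (roughly 7700 kcal per kg of body mass) and the linear expenditure model are illustrative assumptions, not the paper's calibration.

```python
# Minimal sketch of a weight state equation driven by the energy gap:
#   dW/dt = (intake - expenditure(W)) / KCAL_PER_KG,
# with expenditure modeled (as an assumption) as proportional to body weight.

KCAL_PER_KG = 7700.0  # approximate energy content of 1 kg of body mass

def simulate_weight(w0, intake_kcal, expenditure_per_kg, days):
    """Euler steps of the state equation, one step per day."""
    w = w0
    for _ in range(days):
        w += (intake_kcal - expenditure_per_kg * w) / KCAL_PER_KG
    return w

# Steady state: intake = expenditure_per_kg * W  =>  W* = 2400 / 32 = 75 kg.
print(simulate_weight(70.0, intake_kcal=2400.0, expenditure_per_kg=32.0, days=5000))
```

Under this linear expenditure assumption the trajectory approaches the steady-state weight monotonically; the paper's richer model is what admits the focus with dampened oscillations.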
Budget Allocation in a Competitive Communication Spectrum Economy
NASA Astrophysics Data System (ADS)
Lin, Ming-Hua; Tsai, Jung-Fa; Ye, Yinyu
2009-12-01
This study discusses how to adjust "monetary budget" to meet each user's physical power demand, or balance all individual utilities in a competitive "spectrum market" of a communication system. In the market, multiple users share a common frequency or tone band and each of them uses the budget to purchase its own transmit power spectra (taking others as given) in maximizing its Shannon utility or pay-off function that includes the effect of interferences. A market equilibrium is a budget allocation, price spectrum, and tone power distribution that independently and simultaneously maximizes each user's utility. The equilibrium conditions of the market are formulated and analyzed, and the existence of an equilibrium is proved. Computational results and comparisons between the competitive equilibrium and Nash equilibrium solutions are also presented, which show that the competitive market equilibrium solution often provides more efficient power distribution.
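A single user's subproblem, maximizing Shannon utility across tones under a fixed power budget while taking others as given, reduces in the interference-free case to the classic water-filling allocation. The sketch below solves only that simplified building block, not the paper's market equilibrium; noise levels and the budget are hypothetical.

```python
# Water-filling: maximize sum_k log(1 + p_k / n_k) subject to sum_k p_k = budget,
# p_k >= 0. The optimum is p_k = max(mu - n_k, 0) for a water level mu found here
# by bisection.

def water_filling(noise, budget, tol=1e-12):
    lo, hi = min(noise), max(noise) + budget
    while hi - lo > tol:
        mu = (lo + hi) / 2
        if sum(max(mu - n, 0.0) for n in noise) > budget:
            hi = mu
        else:
            lo = mu
    return [max(lo - n, 0.0) for n in noise]

powers = water_filling(noise=[1.0, 2.0, 4.0], budget=3.0)
print(powers)  # noisier tones get less power; the noisiest may get none
```

In the competitive market of the paper, each user's budget buys power subject to a price spectrum and interference from others, so the equilibrium computation couples many such problems together.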
Miller, Gabriel A.; Clissold, Fiona J.; Mayntz, David; Simpson, Stephen J.
2009-01-01
Ectotherms have evolved preferences for particular body temperatures, but the nutritional and life-history consequences of such temperature preferences are not well understood. We measured thermal preferences in Locusta migratoria (migratory locusts) and used a multi-factorial experimental design to investigate relationships between growth/development and macronutrient utilization (conversion of ingesta to body mass) as a function of temperature. A range of macronutrient intake values for insects at 26, 32 and 38°C was achieved by offering individuals high-protein diets, high-carbohydrate diets or a choice between both. Locusts placed in a thermal gradient selected temperatures near 38°C, maximizing rates of weight gain; however, this enhanced growth rate came at the cost of poor protein and carbohydrate utilization. Protein and carbohydrate were equally digested across temperature treatments, but once digested both macronutrients were converted to growth most efficiently at the intermediate temperature (32°C). Body temperature preference thus yielded maximal growth rates at the expense of efficient nutrient utilization. PMID:19625322
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed at robust estimation of the parameters and states of the hemodynamic model using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum-likelihood-based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable for the hemodynamic state estimates; they were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM appears to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
NASA Astrophysics Data System (ADS)
Choi, D. H.; An, Y. H.; Chung, K. J.; Hwang, Y. S.
2012-01-01
A 94 GHz heterodyne interferometer system was designed to measure the plasma density of VEST (Versatile Experiment Spherical Torus), which was recently built at Seoul National University. Two 94 GHz Gunn oscillators with a frequency difference of 40 MHz were used in the microwave electronics part of a heterodyne interferometer system. A compact beam focusing system utilizing a pair of plano-convex lenses and a concave mirror was designed to maximize the effective beam reception and spatial resolution. Beam path analysis based on Gaussian optics was used in the design of the beam focusing system. The design of the beam focusing system and the beam path analysis were verified with a couple of experiments that were done within an experimental framework that considered the real dimensions of a vacuum vessel. Optimum distances between the optical components and the beam radii along the beam path obtained from the experiments were in good agreement with the beam path analysis using the Gaussian optics. Both experimentation and numerical calculations confirmed that the designed beam focusing system maximized the spatial resolution of the measurement; moreover, the beam waist was located at the center of the plasma to generate a phase shift more effectively in plasmas. The interferometer system presented in this paper is expected to be used in the measurements of line integrated plasma densities during the start-up phase of VEST.
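The beam-path analysis mentioned above rests on standard Gaussian optics: a beam with waist w0 has radius w(z) = w0 * sqrt(1 + (z/zR)^2), where zR = pi * w0^2 / lambda is the Rayleigh range. The 94 GHz frequency is from the abstract; the waist size below is a made-up example, not the VEST design value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def beam_radius(z, w0, wavelength):
    """Gaussian beam radius at distance z (m) from a waist of radius w0 (m)."""
    z_r = math.pi * w0 ** 2 / wavelength  # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

wavelength = C / 94e9  # about 3.2 mm at 94 GHz
w0 = 0.02              # hypothetical 2 cm waist
print(beam_radius(0.0, w0, wavelength))  # equals w0 at the waist
print(beam_radius(1.0, w0, wavelength))  # beam has expanded 1 m from the waist
```

Placing the waist at the plasma center, as the abstract describes, minimizes the beam radius where the phase shift is generated, which is what maximizes the spatial resolution of the density measurement.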
The Impact of Menstrual Cycle Phase on Economic Choice and Rationality.
Lazzaro, Stephanie C; Rutledge, Robb B; Burghart, Daniel R; Glimcher, Paul W
2016-01-01
It is well known that hormones affect both brain and behavior, but less is known about the extent to which hormones affect economic decision-making. Numerous studies demonstrate gender differences in attitudes to risk and loss in financial decision-making, often finding that women are more loss and risk averse than men. It is unclear what drives these effects and whether cyclically varying hormonal differences between men and women contribute to differences in economic preferences. We focus here on how economic rationality and preferences change as a function of menstrual cycle phase in women. We tested adherence to the Generalized Axiom of Revealed Preference (GARP), the standard test of economic rationality. If choices satisfy GARP then there exists a well-behaved utility function that the subject's decisions maximize. We also examined whether risk attitudes and loss aversion change as a function of cycle phase. We found that, despite large fluctuations in hormone levels, women are as technically rational in their choice behavior as their male counterparts at all phases of the menstrual cycle. However, women are more likely to choose risky options that can lead to potential losses while ovulating; during ovulation women are less loss averse than men and therefore more economically rational than men in this regard. These findings may have market-level implications: ovulating women more effectively maximize expected value than do other groups.
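The GARP test described above can be implemented directly: choices satisfy GARP if no bundle is transitively revealed preferred to another bundle that is strictly directly revealed preferred to it. A sketch with toy prices and bundles (not the study's data):

```python
# GARP consistency check. prices[i] and bundles[i] are the price vector and the
# bundle chosen in observation i.

def satisfies_garp(prices, bundles):
    n = len(bundles)

    def cost(i, j):
        """Cost of bundle j at the prices of observation i."""
        return sum(p * x for p, x in zip(prices[i], bundles[j]))

    # Direct revealed preference: i chosen when j was affordable.
    R = [[cost(i, i) >= cost(i, j) for j in range(n)] for i in range(n)]
    # Transitive closure (Warshall).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # Violation: i revealed preferred to j, yet j chosen when i was strictly cheaper.
    for i in range(n):
        for j in range(n):
            if R[i][j] and cost(j, j) > cost(j, i):
                return False
    return True

# Consistent choices: each chosen bundle is unaffordable at the other's prices.
print(satisfies_garp([(1, 2), (2, 1)], [(4, 1), (1, 4)]))
# A violation: the bundles are revealed preferred to each other, one strictly.
print(satisfies_garp([(1, 1), (2, 0.5)], [(2, 2), (3, 1)]))
```

If the check passes, Afriat's theorem guarantees a well-behaved utility function rationalizing the choices, which is the sense of "technically rational" used in the abstract.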
Melioration behaviour in the Harvard game is reduced by simplifying decision outcomes.
Stillwell, David J; Tunney, Richard J
2009-11-01
Self-control experiments have previously been highlighted as examples of suboptimal decision making. In one such experiment, the Harvard game, participants make repeated choices between two alternatives. One alternative has a higher immediate pay-off than the other, but with repeated choices results in a lower overall pay-off. Preference for the alternative with the higher immediate pay-off seems to be impulsive and will result in a failure to maximize pay-offs. We report an experiment that modifies the Harvard game, dividing the pay-off from each choice into two separate consequences-the immediate and the historic components. Choosing the alternative with the higher immediate pay-off ends the session prematurely, leading to a loss of opportunities to earn further pay-offs and ultimately to a reduced overall pay-off. This makes it easier for participants to learn the outcomes of their actions. It also provides the opportunity for a further test of normative decision making by means of one of its most specific and paradoxical predictions-that the truly rational agent should switch from self-control to impulsivity toward the end of the experimental sessions. The finding that participants maximize their expected utility by both overcoming impulsivity and learning to switch implies that melioration behaviour is not due to the lure of impulsivity, but due to the difficulty of learning which components are included in the pay-off schedules.
Competitive Facility Location with Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2009-10-01
This paper proposes a new location problem of competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying-level maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed by improving the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.
Physical renormalization condition for de Sitter QED
NASA Astrophysics Data System (ADS)
Hayashinaka, Takahiro; Xue, She-Sheng
2018-05-01
We considered a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous and constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition, named maximal subtraction, imposes exponential suppression on the renormalized currents in the limit of large mass of the charged particles. The maximal subtraction changes the behaviors of the induced currents previously obtained with the conventional minimal subtraction scheme. The maximal subtraction is favored because it yields several physically sensible predictions, including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and a finite current for the massless fermion.
Time-Extended Policies in Multi-Agent Reinforcement Learning
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian K.
2004-01-01
Reinforcement learning methods perform well in many domains where a single agent needs to take a sequence of actions to perform a task. These methods use sequences of single-time-step rewards to create a policy that tries to maximize a time-extended utility, which is a (possibly discounted) sum of these rewards. In this paper we build on our previous work showing how these methods can be extended to a multi-agent environment where each agent creates its own policy that works towards maximizing a time-extended global utility over all agents' actions. We show improved methods for creating time-extended utilities for the agents that are both "aligned" with the global utility and "learnable." We then show how to create single-time-step rewards while avoiding the pitfall of having rewards aligned with the global reward leading to utilities not aligned with the global utility. Finally, we apply these reward functions to the multi-agent Gridworld problem. We explicitly quantify a utility's learnability and alignment, and show that reinforcement learning agents using the prescribed reward functions successfully trade off learnability and alignment. As a result they outperform both global (e.g., "team game") and local (e.g., "perfectly learnable") reinforcement learning solutions by as much as an order of magnitude.
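The structure of such a melioration schedule, and the paradoxical end-game switch, can be illustrated with a toy simulation: one choice always pays more on the current trial, but the payoff of both choices rises with the fraction of "self-controlled" choices in a recent window. The payoff numbers and ten-trial window below are invented for illustration and are not the experiment's actual schedule.

```python
from collections import deque

def harvard_game(policy, trials=200, window=10):
    """Toy melioration schedule: choice 0 pays 0.2 more now, but the payoffs of
    BOTH options rise with the fraction of choice-1 picks in the last window."""
    history = deque([0] * window, maxlen=window)
    total = 0.0
    for t in range(trials):
        h = sum(history) / window           # recent fraction of 'self-control' choices
        choice = policy(t, trials)
        total += (0.3 + h) if choice == 0 else (0.1 + h)
        history.append(choice)
    return total

always_impulsive = lambda t, T: 0           # meliorate: always the larger immediate pay-off
always_control = lambda t, T: 1             # always the smaller immediate pay-off
switcher = lambda t, T: 0 if t >= T - 2 else 1  # defect once future consequences run out
```

In this toy version the self-controlled policy earns far more than the meliorating one, and switching to the impulsive choice on the final trials, when its delayed cost can no longer be collected, earns slightly more still, mirroring the "rational switch" prediction.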
Illustrated Examples of the Effects of Risk Preferences and Expectations on Bargaining Outcomes.
ERIC Educational Resources Information Center
Dickinson, David L.
2003-01-01
Describes bargaining examples that use expected utility theory. Provides example results that are intuitive, shown graphically and algebraically, and offers upper-level students sample problems that illustrate the usefulness of expected utility theory. (JEH)
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
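The EI function that both TRIKE and CYCLONE maximize has a standard closed form for a minimization problem: EI = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma, where mu and sigma are the Kriging mean and standard deviation at a candidate point and f_min is the best value observed so far. A minimal sketch of that formula (not the paper's full trust-region machinery):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """EI of sampling a point with Kriging mean mu and std sigma, given the
    best (smallest) observed value f_min, for a minimization problem."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)        # no posterior uncertainty left
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_min - mu) * cdf + sigma * pdf
```

TRIKE's trust-region update then compares the actual improvement achieved at the EI-maximizing point against this predicted EI and grows or shrinks the region accordingly.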
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V
2017-07-01
A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration, and also the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill, while avoiding oxygen limited conditions. This requires a two stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% under the maximum fill when tested using eight pilot scale batches, and over filling was avoided. The results were reproducible, unlike the reference experiments which show over 10% variation in the final tank fill, and this also includes over filling. 
The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc.
Ketcham, Jonathan D; Kuminoff, Nicolai V; Powers, Christopher A
2016-12-01
Consumers' enrollment decisions in Medicare Part D can be explained by Abaluck and Gruber’s (2011) model of utility maximization with psychological biases or by a neoclassical version of their model that precludes such biases. We evaluate these competing hypotheses by applying nonparametric tests of utility maximization and model validation tests to administrative data. We find that 79 percent of enrollment decisions from 2006 to 2010 satisfied basic axioms of consumer theory under the assumption of full information. The validation tests provide evidence against widespread psychological biases. In particular, we find that precluding psychological biases improves the structural model's out-of-sample predictions for consumer behavior.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
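The simulated annealing search used to optimize the expected cross entropy can be sketched generically: propose a random perturbation, always accept improvements, accept deteriorations with Boltzmann probability, and cool the temperature geometrically. The one-dimensional objective below is a toy stand-in, not the cross-entropy utility of the paper.

```python
import math, random

random.seed(1)

def objective(x):
    """Toy multimodal objective; global minimum near x = -0.3."""
    return x * x + 2.0 * math.sin(5.0 * x) + 2.0

def simulated_annealing(f, x0, steps=5000, t0=1.0, cooling=0.999):
    """Minimize f: always accept improvements, accept uphill moves with
    Boltzmann probability exp(-delta / temperature), and cool geometrically."""
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)   # random local perturbation
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                   # accept the move
        if fx < best_f:
            best_x, best_f = x, fx             # track the best point seen
        t *= cooling
    return best_x, best_f

x_best, f_best = simulated_annealing(objective, x0=4.0)
```

The uphill-acceptance step is what lets the search escape local minima of the objective, which is the reason the methodology can locate input settings that globally extremize the expected cross entropy rather than a nearby local optimum.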
Sequence, assembly and annotation of the maize W22 genome
USDA-ARS's Scientific Manuscript database
Since its adoption by Brink and colleagues in the 1950s and 60s, the maize W22 inbred has been utilized extensively to understand fundamental genetic and epigenetic processes such as recombination, transposition and paramutation. To maximize the utility of W22 in gene discovery, we have Illumina sequen...
Complete utilization of spent coffee to biodiesel, bio-oil and biochar
USDA-ARS's Scientific Manuscript database
Energy production from renewable or waste biomass/material is a more attractive alternative compared to conventional feedstocks, such as corn and soybean. The objective of this study is to maximize utilization of any waste organic carbon material to produce renewable energy. This study presents tota...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-10
... Utility of List 1 Chemicals Screened Through EPA's Endocrine Disruptor Screening Program; Notice of... to the test orders issued under the Endocrine Disruptor Screening Program. DATES: Comments must be... testing of chemical substances for potential endocrine effects. Potentially affected entities, identified...
Can Monkeys Make Investments Based on Maximized Pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-01-01
Animals can maximize benefits, but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting its investment to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777
Orellana, Liliana; Rotnitzky, Andrea; Robins, James M
2010-01-01
Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, doubly robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
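The inverse probability weighted evaluation of a fixed regime that these methods build on can be sketched on synthetic data: weight the outcome of each subject whose observed treatment agrees with the regime by the inverse probability of receiving that treatment. The data-generating process and the threshold-indexed regime class below are invented for illustration, and the propensities are assumed known rather than estimated.

```python
import random

random.seed(2)

# synthetic observational data: covariate x, physician's treatment a, utility y
n = 20000
data = []
for _ in range(n):
    x = random.random()
    p_treat = 0.3 + 0.4 * x                     # sicker patients treated more often
    a = 1 if random.random() < p_treat else 0
    # true outcome: treatment helps exactly when x > 0.5
    y = (1.0 if (a == 1) == (x > 0.5) else 0.0) + random.gauss(0.0, 0.1)
    data.append((x, a, p_treat, y))

def ipw_value(theta):
    """IPW estimate of E[Y] under the regime 'treat iff x > theta'
    (treatment probabilities assumed known here)."""
    total = 0.0
    for x, a, p, y in data:
        d = 1 if x > theta else 0
        if a == d:                               # subject followed the regime
            total += y / (p if d == 1 else 1.0 - p)
    return total / n

thetas = [i / 10 for i in range(1, 10)]
best_theta = max(thetas, key=ipw_value)          # should sit near the true 0.5
```

Scanning the estimated value over the Euclidean index of the regime class, here a single threshold, and picking the maximizer is the simplest version of selecting the optimal regime within an enforceable set.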
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Nurses wanted: Is the job too harsh or is the wage too low?
Di Tommaso, M L; Strøm, S; Saether, E M
2009-05-01
When entering the job market, nurses choose among different kinds of jobs. Each of these jobs is characterized by wage, sector (primary care or hospital) and shift (daytime work or shift work). This paper estimates a multi-sector-job-type random utility model of labor supply on data for Norwegian registered nurses (RNs) in 2000. The empirical model implies that labor supply is rather inelastic; a 10% increase in the wage rates for all nurses is estimated to yield a 3.3% increase in overall labor supply. This modest overall response conceals much stronger inter-job-type responses. Our approach differs from previous studies in two ways. First, to our knowledge, it is the first time that a model of labor supply for nurses is estimated taking explicitly into account the choices that RNs have regarding work place and type of job. Second, it differs from previous studies with respect to the measurement of the compensation for different types of work. So far, the focus has been on wage differentials, but there are more attributes to a job than the wage. Based on the estimated random utility model we therefore calculate the expected compensation that makes a utility-maximizing agent indifferent between types of jobs, here between shift work and daytime work. It turns out that Norwegian nurses may be willing to work shifts, relative to daytime work, for a lower wage than the current one.
Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic
NASA Astrophysics Data System (ADS)
Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat
2017-03-01
The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution and that the risk of the portfolio is measured by the Value-at-Risk (VaR). The optimization of the portfolio is thus based on the Mean-VaR model, which is solved using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is an equation for the weight vector that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and the risk tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight composition vector and the efficient surface of the portfolio are obtained. The weight compositions and efficient surface charts can be used as a guide for investors in making investment decisions.
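The Lagrange-multiplier solution for a quadratic risk-tolerance objective can be sketched in closed form: maximizing tau * w'mu - w' Sigma w subject to w'1 = 1 gives w = (1/2) * Sigma^{-1} (tau * mu - gamma * 1), with the multiplier gamma fixed by the budget constraint. This is the plain mean-variance analogue of the approach, not the paper's Mean-VaR model, and the return and covariance numbers are invented.

```python
import numpy as np

def mean_variance_weights(mu, cov, tau):
    """Weights maximizing tau * w'mu - w'Cov w subject to sum(w) = 1,
    from the first-order Lagrange conditions (closed form)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    # multiplier gamma chosen so that the weights sum to one
    gamma = (tau * ones @ inv @ mu - 2.0) / (ones @ inv @ ones)
    return 0.5 * inv @ (tau * mu - gamma * ones)

mu = np.array([0.10, 0.07, 0.05])                    # illustrative mean returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.03, 0.01],
                [0.00, 0.01, 0.02]])                 # illustrative covariances
w = mean_variance_weights(mu, cov, tau=0.5)
```

Sweeping tau from zero (the minimum-variance portfolio) upward traces out the efficient surface described in the abstract.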
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-10-20
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_1/2, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_1/2 and generation independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as a stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
Speeded Reaching Movements around Invisible Obstacles
Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.
2012-01-01
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
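An ideal movement planner of this kind can be sketched in one dimension: given Gaussian endpoint noise, compute the probability of landing in the target and obstacle regions for each candidate aim point and pick the aim point that maximizes expected gain. The region boundaries, noise level, and reward/penalty values below are illustrative, not the experiment's actual configuration.

```python
import math

def norm_cdf(x, mean, sd):
    """Standard normal CDF evaluated with the given mean and standard deviation."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def expected_gain(aim, sd=2.0, reward=25.0, loss=-125.0,
                  target=(0.0, 8.0), obstacle=(-9.0, -3.0)):
    """Expected monetary gain of aiming at 'aim' under Gaussian endpoint noise."""
    p_target = norm_cdf(target[1], aim, sd) - norm_cdf(target[0], aim, sd)
    p_obstacle = norm_cdf(obstacle[1], aim, sd) - norm_cdf(obstacle[0], aim, sd)
    return reward * p_target + loss * p_obstacle

# the ideal planner picks the aim point maximizing expected gain over a grid
aims = [i * 0.1 for i in range(-20, 81)]
best_aim = max(aims, key=expected_gain)
```

Because the penalty region sits to one side, the optimal aim point shifts away from the obstacle relative to the target center whenever the noise is large enough for the obstacle to matter; comparing human endpoints to this maximizer is the essence of the dominance test.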
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis can be extended to a categorical dependent variable: logistic regression models the outcome using a linear combination of independent variables, providing a statistical approach to predicting the probability that an event occurs. However, classifying all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
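The contrast between the two algorithms (hard assignments in K-means versus soft responsibilities in EM) can be sketched on one-dimensional synthetic data; the two-cluster data set and the initial centers below are invented for illustration.

```python
import math, random

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)] +
        [random.gauss(6.0, 1.0) for _ in range(200)])

def kmeans_1d(xs, c0, c1, iters=25):
    """K-means: hard-assign each point to the nearest center, then re-average."""
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

def em_1d(xs, m0, m1, iters=50):
    """EM for a two-component Gaussian mixture: soft responsibilities instead."""
    w, s0, s1 = 0.5, 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 0
        r = []
        for x in xs:
            p0 = w * math.exp(-0.5 * ((x - m0) / s0) ** 2) / s0
            p1 = (1.0 - w) * math.exp(-0.5 * ((x - m1) / s1) ** 2) / s1
            r.append(p0 / (p0 + p1))
        # M-step: re-estimate mixing weight, means and standard deviations
        n0 = sum(r)
        w = n0 / len(xs)
        m0 = sum(ri * x for ri, x in zip(r, xs)) / n0
        m1 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n0)
        s0 = max(1e-6, math.sqrt(sum(ri * (x - m0) ** 2
                                     for ri, x in zip(r, xs)) / n0))
        s1 = max(1e-6, math.sqrt(sum((1 - ri) * (x - m1) ** 2
                                     for ri, x in zip(r, xs)) / (len(xs) - n0)))
    return m0, m1

c0, c1 = kmeans_1d(data, 1.0, 5.0)
m0, m1 = em_1d(data, 1.0, 5.0)
```

On well-separated clusters the two methods agree closely; EM's soft responsibilities matter most when the clusters overlap or have unequal spreads, which is where mixture modeling tends to outperform K-means.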
Maintaining Registered Nurses' Currency in Informatics
ERIC Educational Resources Information Center
Strawn, Jennifer Alaine
2017-01-01
Technology has changed how registered nurses (RNs) provide care at the bedside. As more technologies are utilized to improve the quality and safety of care, maximize efficiencies, and decrease the costs of care, one must question how well these information technologies (IT) are integrated and utilized by the front-line bedside nurse in his or her…
NASA Astrophysics Data System (ADS)
Davendralingam, Navindran
Conceptual design of aircraft and the airline network (routes) on which aircraft fly are inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage the risk of serving future projected demand using a 'yet to be introduced' aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations.
A dynamic programming approach is employed to simulate the reflexive nature of airline supply-demand interactions by modeling the aggregate changes in demand that would result from tactical allocations of aircraft to maximize profit. The best yet-to-be-introduced aircraft maximizes profit by minimizing the long term fleetwide direct operating costs.
Network efficient power control for wireless communication systems.
Campos-Delgado, Daniel U; Luna-Rivera, Jose Martin; Martinez-Sánchez, C J; Gutierrez, Carlos A; Tecpanecatl-Xihuitl, J L
2014-01-01
We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer-loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by considering adaptively the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer-loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next the inner-loop implements a feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process and detector selection and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network.
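The inner-loop feedback power control can be sketched with the classical distributed SINR-tracking iteration, in which each link scales its transmit power by the ratio of its target SINR to its currently measured SINR. The gain matrix, noise level, and target value below are invented, and this is a generic iteration rather than the paper's exact two-loop scheme.

```python
import numpy as np

def power_control(G, noise, gamma_target, iters=200):
    """Distributed SINR-tracking power control: p_i <- (gamma_target / SINR_i) * p_i."""
    g = np.diag(G)                          # direct link gains
    p = np.ones(len(G))                     # initial transmit powers
    for _ in range(iters):
        interference = G @ p - g * p + noise
        sinr = g * p / interference
        p = (gamma_target / sinr) * p       # each link scales its own power
    interference = G @ p - g * p + noise    # achieved SINR at the final powers
    return p, g * p / interference

# illustrative 3-link gain matrix: strong direct links, weak cross interference
G = np.array([[1.00, 0.10, 0.05],
              [0.10, 1.00, 0.10],
              [0.05, 0.10, 1.00]])
p, sinr = power_control(G, noise=0.01, gamma_target=4.0)
```

When the target SINR is feasible for the given gain matrix, this iteration converges to the unique minimum-power solution that meets the target on every link, which is why an outer loop can safely treat the achieved SINR as a control variable when maximizing network utility.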
Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbas, Nikhar; Tom, Nathan M
2017-06-03
Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
Mino, H
2007-01-01
This paper proposes a method to estimate the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes of shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled as SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived using the expectation-maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated with the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and each receiver has M and N antennas, respectively, and the IC operates in time-division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, by contrast, minimizes the variance of the SINR to hedge against the variability due to CSI error. A Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum-rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.
Directing solar photons to sustainably meet food, energy, and water needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gencer, Emre; Miskin, Caleb; Sun, Xingshu
As we approach a “Full Earth” of over ten billion people within the next century, unprecedented demands will be placed on food, energy and water (FEW) supplies. The grand challenge before us is to sustainably meet humanity’s FEW needs using scarcer resources. To overcome this challenge, we propose the utilization of the entire solar spectrum by redirecting solar photons to maximize FEW production from a given land area. We present novel solar spectrum unbundling FEW systems (SUFEWS), which can meet FEW needs locally while reducing the overall environmental impact of meeting these needs. The ability to meet FEW needs locally is critical, as significant population growth is expected in less-developed areas of the world. As a result, the proposed system presents a solution to harness the same amount of solar products (crops, electricity, and purified water) that could otherwise require ~60% more land if SUFEWS were not used, a major step for Full Earth preparedness.
Near-Earth Asteroid (NEA) Scout
NASA Technical Reports Server (NTRS)
McNutt, Leslie; Johnson, Les; Kahn, Peter; Castillo-Rogez, Julie; Frick, Andreas
2014-01-01
Near-Earth asteroids (NEAs) are the most easily accessible bodies in the solar system, and detections of NEAs are expected to grow exponentially in the near future, offering increasing target opportunities. As NASA continues to refine its plans to possibly explore these small worlds with human explorers, initial reconnaissance with comparatively inexpensive robotic precursors is necessary. Obtaining and analyzing relevant data about these bodies via robotic precursors before committing a crew to visit a NEA will significantly minimize crew and mission risk, as well as maximize exploration return potential. The Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL) are jointly examining a potential mission concept, tentatively called 'NEAScout,' utilizing a low-cost platform such as CubeSat in response to the current needs for affordable missions with exploration science value. The NEAScout mission concept would be treated as a secondary payload on the Space Launch System (SLS) Exploration Mission 1 (EM-1), the first planned flight of the SLS and the second un-crewed test flight of the Orion Multi-Purpose Crew Vehicle (MPCV).
NASA Technical Reports Server (NTRS)
McNutt, Leslie; Johnson, Les; Clardy, Dennon; Castillo-Rogez, Julie; Frick, Andreas; Jones, Laura
2014-01-01
Near-Earth Asteroids (NEAs) are easily accessible objects in Earth's vicinity. Detections of NEAs are expected to grow in the near future, offering increasing target opportunities. As NASA continues to refine its plans to possibly explore these small worlds with human explorers, initial reconnaissance with comparatively inexpensive robotic precursors is necessary. Obtaining and analyzing relevant data about these bodies via robotic precursors before committing a crew to visit a NEA will significantly minimize crew and mission risk, as well as maximize exploration return potential. The Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL) are jointly examining a mission concept, tentatively called 'NEA Scout,' utilizing a low-cost CubeSat platform in response to the current needs for affordable missions with exploration science value. The NEA Scout mission concept would be a secondary payload on the Space Launch System (SLS) Exploration Mission 1 (EM-1), the first planned flight of the SLS and the second un-crewed test flight of the Orion Multi-Purpose Crew Vehicle (MPCV).
The HST/STIS Next Generation Spectral Library
NASA Technical Reports Server (NTRS)
Gregg, M. D.; Silva, D.; Rayner, J.; Worthey, G.; Valdes, F.; Pickles, A.; Rose, J.; Carney, B.; Vacca, W.
2006-01-01
During Cycles 10, 12, and 13, we obtained STIS G230LB, G430L, and G750L spectra of 378 bright stars covering a wide range in abundance, effective temperature, and luminosity. This HST/STIS Next Generation Spectral Library was scheduled to reach its goal of 600 targets by the end of Cycle 13 when STIS came to an untimely end. Even at 2/3 complete, the library significantly improves the sampling of stellar atmosphere parameter space compared to most other spectral libraries by including the near-UV and significant numbers of metal poor and super-solar abundance stars. Numerous calibration challenges have been encountered, some expected, some not; these arise from the use of the E1 aperture location, non-standard wavelength calibration, and, most significantly, the serious contamination of the near-UV spectra by red light. Maximizing the utility of the library depends directly on overcoming or at least minimizing these problems, especially correcting the UV spectra.
Stimulus bill implementation: expanding meaningful use of health IT.
Cunningham, Rob
2009-08-25
The American Recovery and Reinvestment Act authorizes an estimated $38 billion in incentives and supports for health information technology (IT) from 2009 to 2019. After years of sluggish HIT adoption, this crisis-driven investment of public funds creates a unique opportunity for rapid diffusion of a technology that is widely expected to improve care, save money, and facilitate transformation of the troubled U.S. health system. Achieving maximal effect from the stimulus funds is nevertheless a difficult challenge. The Recovery Act strengthens the federal government's leadership role in promoting HIT. But successful adoption and utilization across the health system will also require development of a supportive infrastructure and broad-based efforts by providers, vendors, state-based agencies, and other health system stakeholders. Optimal use of IT for health care may require extensive reengineering of medical practice and of existing systems of payment. The future course of HIT adoption will also be subject to the effects of any health care reform legislation and of technological innovation in the fast-changing world of electronic communications.
EVIDENCE-BASED MEDICINE/PRACTICE IN SPORTS PHYSICAL THERAPY
Lehecka, B.J.
2012-01-01
A push for the use of evidence-based medicine and evidence-based practice patterns has permeated most health care disciplines. The use of evidence-based practice in sports physical therapy may improve health care quality, reduce medical errors, help balance known benefits and risks, challenge views based on beliefs rather than evidence, and help to integrate patient preferences into decision-making. In this era of health care utilization, sports physical therapists are expected to integrate clinical experience with the conscientious, explicit, and judicious use of research evidence in order to make clearly informed decisions that maximize patient well-being. One of the more common reasons for not using evidence in clinical practice is a perceived lack of skills and knowledge when searching for or appraising research. This clinical commentary was developed to educate the readership on what constitutes evidence-based practice, and on strategies used to seek evidence in the daily clinical practice of sports physical therapy. PMID:23091778
Evidence - based medicine/practice in sports physical therapy.
Manske, Robert C; Lehecka, B J
2012-10-01
A push for the use of evidence-based medicine and evidence-based practice patterns has permeated most health care disciplines. The use of evidence-based practice in sports physical therapy may improve health care quality, reduce medical errors, help balance known benefits and risks, challenge views based on beliefs rather than evidence, and help to integrate patient preferences into decision-making. In this era of health care utilization, sports physical therapists are expected to integrate clinical experience with the conscientious, explicit, and judicious use of research evidence in order to make clearly informed decisions that maximize patient well-being. One of the more common reasons for not using evidence in clinical practice is a perceived lack of skills and knowledge when searching for or appraising research. This clinical commentary was developed to educate the readership on what constitutes evidence-based practice, and on strategies used to seek evidence in the daily clinical practice of sports physical therapy.
People learn other people's preferences through inverse decision-making.
Jern, Alan; Lucas, Christopher G; Kemp, Charles
2017-11-01
People are capable of learning other people's preferences by observing the choices they make. We propose that this learning relies on inverse decision-making: inverting a decision-making model to infer the preferences that led to an observed choice. In Experiment 1, participants observed 47 choices made by others and ranked them by how strongly each choice suggested that the decision maker had a preference for a specific item. An inverse decision-making model generated predictions that were in accordance with participants' inferences. Experiment 2 replicated and extended a previous study by Newtson (1974) in which participants observed pairs of choices and made judgments about which choice provided stronger evidence for a preference. Inverse decision-making again predicted the results, including a result that previous accounts could not explain. Experiment 3 used the same method as Experiment 2 and found that participants did not expect decision makers to be perfect utility-maximizers. Copyright © 2017 Elsevier B.V. All rights reserved.
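The idea of inverting a choice model can be sketched with an assumed softmax decision model (the article's model is richer): candidate preference profiles are scored by the likelihood they assign to an observed choice.

```python
import math

# Forward model (assumed): a softmax decision maker chooses option i with
# probability proportional to exp(beta * utility_i).
def choice_prob(utilities, chosen, beta=1.0):
    exps = [math.exp(beta * u) for u in utilities]
    return exps[chosen] / sum(exps)

# Inverse step: given the observed choice, score each candidate
# preference profile by the likelihood it assigns to that choice.
def preference_evidence(candidate_utils, chosen):
    return {name: choice_prob(u, chosen) for name, u in candidate_utils.items()}

# Observed: the decision maker picked option 0 out of three options.
evidence = preference_evidence(
    {"prefers_item_0": [2.0, 0.0, 0.0], "indifferent": [0.0, 0.0, 0.0]},
    chosen=0,
)
```

The profile that makes the observed choice more likely ("prefers_item_0") is the better explanation of the choice, which is the core of the inverse-decision-making inference.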
The probabilistic nature of preferential choice.
Rieskamp, Jörg
2008-11-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes different choices in nearly identical situations, or why the magnitude of these inconsistencies varies in different situations. To illustrate the advantage of probabilistic theories, three probabilistic theories of decision making under risk are compared with their deterministic counterparts. The probabilistic theories are (a) a probabilistic version of a simple choice heuristic, (b) a probabilistic version of cumulative prospect theory, and (c) decision field theory. By testing the theories with the data from three experimental studies, the superiority of the probabilistic models over their deterministic counterparts in predicting people's decisions under risk become evident. When testing the probabilistic theories against each other, decision field theory provides the best account of the observed behavior.
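The contrast between a deterministic and a probabilistic account can be sketched as follows; the logit rule and its sensitivity parameter theta are illustrative assumptions, not one of the article's fitted models:

```python
import math

def expected_utility(gamble):
    # gamble: list of (probability, payoff) pairs
    return sum(p * x for p, x in gamble)

def deterministic_choice(a, b):
    # Always picks the higher-EV option, so it predicts identical choices
    # in identical situations and cannot explain inconsistency.
    return 0 if expected_utility(a) >= expected_utility(b) else 1

def prob_choose_a(a, b, theta=1.0):
    # Probabilistic (logit) rule: choice probability grows smoothly with
    # the EV difference, so near-ties yield near-random choices.
    d = expected_utility(a) - expected_utility(b)
    return 1.0 / (1.0 + math.exp(-theta * d))

risky = [(0.5, 10.0), (0.5, 0.0)]   # expected value 5.0
safe = [(1.0, 4.5)]                 # expected value 4.5
```

For this near-tie, the deterministic rule predicts the risky option every time, while the probabilistic rule predicts it only somewhat more often than chance, which is the kind of graded inconsistency the article argues a theory must capture.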
Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J
2006-07-01
Soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior. ((c) 2006 APA, all rights reserved).
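The cost-benefit logic of the SCH view can be sketched as picking the resource mixture with the lowest expected time; the millisecond figures below are invented for illustration, not data from the experiments:

```python
# Two hypothetical resource mixtures for one interactive task: rely on
# memory (cheap access, costly encoding, error-prone) or on
# perceptual-motor action (slower access, reliable).
strategies = {
    "memory_heavy": {"encode_ms": 900, "access_ms": 50,
                     "error_rate": 0.2, "redo_ms": 950},
    "perceptual_motor": {"encode_ms": 0, "access_ms": 400,
                         "error_rate": 0.0, "redo_ms": 0},
}

def expected_time(s):
    # Expected utility here is just negative expected time: base cost plus
    # the expected cost of redoing the step after an error.
    base = s["encode_ms"] + s["access_ms"]
    return base + s["error_rate"] * s["redo_ms"]

best = min(strategies, key=lambda k: expected_time(strategies[k]))
```

With these stand-in numbers the perceptual-motor mixture wins on expected time, illustrating how an ideal performer allocates resources by temporal cost-benefit rather than by protecting cognitive effort per se.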
Network clustering and community detection using modulus of families of loops.
Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina
2017-01-01
We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
Collective Intelligence. Chapter 17
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2003-01-01
Many systems of self-interested agents have an associated performance criterion that rates the dynamic behavior of the overall system. This chapter presents an introduction to the science of such systems. Formally, collectives are defined as any system having the following two characteristics: First, the system must contain one or more agents each of which we view as trying to maximize an associated private utility; second, the system must have an associated world utility function that rates the possible behaviors of that overall system. In practice, collectives are often very large, distributed, and support little, if any, centralized communication and control, although those characteristics are not part of their formal definition. A naturally occurring example of a collective is a human economy. One can identify the agents and their private utilities as the human individuals in the economy and the associated personal rewards they are each trying to maximize. One could then identify the world utility as the time average of the gross domestic product. ("World utility" per se is not a construction internal to a human economy, but rather something defined from the outside.) To achieve high world utility it is necessary to avoid having the agents work at cross-purposes lest phenomena like liquidity traps or the Tragedy of the Commons (TOC) occur, in which agents' individually pursuing their private utilities lowers world utility. The obvious way to avoid such phenomena is by modifying the agents' utility functions to be "aligned" with the world utility. This can be done via punitive legislation. A real-world example of an attempt to do this was the creation of antitrust regulations designed to prevent monopolistic practices.
Horton, Dane M; Saint, David A; Owens, Julie A; Gatford, Kathryn L; Kind, Karen L
2017-07-01
The guinea pig is an alternate small animal model for the study of metabolism, including insulin sensitivity. However, only one study to date has reported the use of the hyperinsulinemic euglycemic clamp in anesthetized animals in this species, and the dose response has not been reported. We therefore characterized the dose-response curve for whole body glucose uptake using recombinant human insulin in the adult guinea pig. Interspecies comparisons with published data showed species differences in maximal whole body responses (guinea pig ≈ human < rat < mouse) and the insulin concentrations at which half-maximal insulin responses occurred (guinea pig > human ≈ rat > mouse). In subsequent studies, we used concomitant D-[3-³H]glucose infusion to characterize insulin sensitivities of whole body glucose uptake, utilization, production, storage, and glycolysis in young adult guinea pigs at human insulin doses that produced approximately half-maximal (7.5 mU·min⁻¹·kg⁻¹) and near-maximal whole body responses (30 mU·min⁻¹·kg⁻¹). Although human insulin infusion increased rates of glucose utilization (up to 68%) and storage and, at high concentrations, increased rates of glycolysis in females, glucose production was only partially suppressed (~23%), even at high insulin doses. Fasting glucose, metabolic clearance of insulin, and rates of glucose utilization, storage, and production during insulin stimulation were higher in female than in male guinea pigs (P < 0.05), but insulin sensitivity of these and of whole body glucose uptake did not differ between sexes. This study establishes a method for measuring partitioned glucose metabolism in chronically catheterized conscious guinea pigs, allowing studies of regulation of insulin sensitivity in this species. Copyright © 2017 the American Physiological Society.
The management of patients with T1 adenocarcinoma of the low rectum: a decision analysis.
Johnston, Calvin F; Tomlinson, George; Temple, Larissa K; Baxter, Nancy N
2013-04-01
Decision making for patients with T1 adenocarcinoma of the low rectum, when treatment options are limited to a transanal local excision or abdominoperineal resection, is challenging. The aim of this study was to develop a contemporary decision analysis to assist patients and clinicians in balancing the goals of maximizing life expectancy and quality of life in this situation. We constructed a Markov-type microsimulation in open-source software. Recurrence rates and quality-of-life parameters were elicited by systematic literature reviews. Sensitivity analyses were performed on key model parameters. Our base case for analysis was a 65-year-old man with low-lying T1N0 rectal cancer. We determined the sensitivity of our model for sex, age up to 80, and T stage. The main outcome measured was quality-adjusted life-years. In the base case, selecting transanal local excision over abdominoperineal resection resulted in a loss of 0.53 years of life expectancy but a gain of 0.97 quality-adjusted life-years. One-way sensitivity analysis demonstrated a health state utility value threshold for permanent colostomy of 0.93. This value ranged from 0.88 to 1.0 based on tumor recurrence risk. There were no other model sensitivities. Some model parameter estimates were based on weak data. In our model, transanal local excision was found to be the preferable approach for most patients. An abdominoperineal resection has a 3.5% longer life expectancy, but this advantage is lost when the quality-of-life reduction reported by stoma patients is weighed in. The minority group in whom abdominoperineal resection is preferred are those who are unwilling to sacrifice 7% of their life expectancy to avoid a permanent stoma. This is estimated to be approximately 25% of all patients. The threshold increases to 12% of life expectancy in high-risk tumors. No other factors are found to be relevant to the decision.
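The quality-adjusted trade-off driving the result can be sketched with a toy calculation; all numbers below are illustrative stand-ins, not the article's estimates:

```python
# Strategy value = life expectancy (years) x health-state utility.
def qaly(life_years, utility):
    return life_years * utility

le_apr, le_tle = 17.0, 16.5      # hypothetical life expectancies (years)
u_stoma, u_intact = 0.85, 1.0    # hypothetical health-state utilities

q_apr = qaly(le_apr, u_stoma)    # abdominoperineal resection (with stoma)
q_tle = qaly(le_tle, u_intact)   # transanal local excision

# Utility threshold at which the two strategies tie: the stoma utility
# that would make the longer-lived strategy equally attractive.
u_threshold = q_tle / le_apr
```

With these stand-in values the local-excision strategy yields more QALYs despite a shorter life expectancy, mirroring the direction of the article's base case; the threshold calculation mirrors its one-way sensitivity analysis on the colostomy health-state utility.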
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
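The underweighting of rare events through experience can be demonstrated with a small simulation; payoffs, probabilities, and sample sizes are illustrative:

```python
import random

# "Decisions from experience": an agent estimates an option's value from a
# few sampled outcomes, so a rare large loss is often never observed.
def rare_loss_gamble(rng):
    # +1 usually, rare large loss; true EV = 0.9*1 + 0.1*(-10) = -0.1
    return 1.0 if rng.random() < 0.9 else -10.0

def sample_mean(payoff_fn, n, rng):
    return sum(payoff_fn(rng) for _ in range(n)) / n

rng = random.Random(0)
decisions = 1000
chose_gamble = 0
for _ in range(decisions):
    estimate = sample_mean(rare_loss_gamble, 5, rng)  # only five draws
    if estimate > 0.0:   # the safe alternative pays exactly 0
        chose_gamble += 1
# P(no rare loss in 5 draws) = 0.9**5 ~ 0.59, so the negative-EV gamble
# is chosen in most simulated decisions.
```

Experience with small samples thus produces systematic preference for the gamble even though its true expected return is negative, which is the underweighting-of-rare-events pattern the review describes.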
A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter
2015-06-27
The first objective function is the information-theoretic part of the problem and aims for entropy maximization, while the second arises from a constraint in the problem, leaving the two objective functions in conflict. For the sake of completeness and clarity, the report also summarizes how each concept is utilized; for example, entropy characterizes the statistical uncertainty of a random variable.
Social and psychological challenges of poker.
Siler, Kyle
2010-09-01
Poker is a competitive, social game of skill and luck, which presents players with numerous challenging strategic and interpersonal decisions. The adaptation of poker into a game played over the internet provides the unprecedented opportunity to quantitatively analyze extremely large numbers of hands and players. This paper analyzes roughly twenty-seven million hands played online in small-stakes, medium-stakes and high-stakes games. Using PokerTracker software, statistics are generated to (a) gauge the types of strategies utilized by players (i.e. the 'strategic demography') at each level and (b) examine the various payoffs associated with different strategies at varying levels of play. The results show that competitive edges attenuate as one moves up levels, and tight-aggressive strategies, which tend to be the most remunerative, become more prevalent. Further, payoffs for different combinations of cards vary between levels, showing how strategic payoffs are derived from competitive interactions. Smaller-stakes players also have more difficulty appropriately weighting incentive structures with frequent small gains and occasional large losses. Consequently, the relationship between winning a large proportion of hands and profitability is negative, and is strongest in small-stakes games. These variations reveal a meta-game of rationality and psychology which underlies the card game. Adopting risk-neutrality to maximize expected value, aggression, and appropriate mental accounting are cognitive burdens on players, and underpin the rationality work (the reconfiguring of personal preferences and goals) that players engage in to be competitive and maximize their winning and profit chances.
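The risk-neutral expected-value reasoning mentioned above reduces, in its simplest form, to pot-odds arithmetic (illustrative numbers, not the article's data):

```python
# EV of calling a bet: win the current pot with probability win_prob,
# otherwise lose the amount of the call.
def call_ev(pot, call_cost, win_prob):
    return win_prob * pot - (1.0 - win_prob) * call_cost

# Equity at which calling is EV-neutral: call / (pot + call).
def breakeven_equity(pot, call_cost):
    return call_cost / (pot + call_cost)

# Facing a 10-unit call into a 40-unit pot, break-even equity is 20%.
ev_good = call_ev(pot=40.0, call_cost=10.0, win_prob=0.25)   # profitable call
ev_bad = call_ev(pot=40.0, call_cost=10.0, win_prob=0.15)    # losing call
```

The frequent-small-gain/occasional-large-loss structure the article highlights is exactly what makes such EV accounting a cognitive burden: the losing call above still wins the pot 15% of the time, so its negative expectation is easy to misjudge from experience.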
D'Acremont, Mathieu; Bossaerts, Peter
2008-12-01
When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
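The two valuation schemes can be contrasted on a simple two-outcome gamble; the square-root utility curve and the risk weight are assumed for illustration:

```python
import math

gamble = [(0.5, 100.0), (0.5, 0.0)]   # (probability, payoff) pairs

def expected_utility(g, u=math.sqrt):
    # State-by-state: weight each outcome's utility by its probability.
    return sum(p * u(x) for p, x in g)

def mean_variance(g, risk_weight=0.01):
    # Trade-off of expected reward against risk (reward variance).
    mean = sum(p * x for p, x in g)
    var = sum(p * (x - mean) ** 2 for p, x in g)
    return mean - risk_weight * var

eu = expected_utility(gamble)    # 0.5*sqrt(100) + 0.5*sqrt(0) = 5.0 utils
mv = mean_variance(gamble)       # 50 - 0.01*2500 = 25.0
```

The expected-utility value requires handling every state probability separately, whereas the mean-variance value depends only on two summary statistics, which is why the latter facilitates learning, as the article notes.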
Phenomenology of maximal and near-maximal lepton mixing
NASA Astrophysics Data System (ADS)
Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε| < 0.3. We show that both probabilities and observables depend on ε quadratically when effects are due to vacuum oscillations, and they depend on ε linearly if matter effects dominate. The most important information on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ε| < 0.3 is allowed within ~4σ (99.995% CL). We suggest ways to measure ε in future experiments. The observable that is most sensitive to ε is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δε ~ 0.07. We also discuss the effects of maximal and near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
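The quadratic versus linear ε-dependence can be made explicit for the vacuum case, sketched here with the standard two-flavor survival probability and the abstract's definition of ε:

```latex
% With sin^2(theta_ex) = (1 - epsilon)/2, the mixing factor is
\sin^2 2\theta_{ex} = 4\sin^2\theta_{ex}\cos^2\theta_{ex}
  = 4\cdot\frac{1-\epsilon}{2}\cdot\frac{1+\epsilon}{2}
  = 1-\epsilon^2 ,
% so the two-flavor vacuum survival probability
P_{ee} = 1 - \sin^2 2\theta_{ex}\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)
       = 1 - (1-\epsilon^2)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)
```

deviates from its maximal-mixing value only at order ε², whereas matter-dominated effects introduce terms linear in ε.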
Health Status and Health Dynamics in an Empirical Model of Expected Longevity*
Benítez-Silva, Hugo; Ni, Huan
2010-01-01
Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217
Electromyographic and neuromuscular analysis in patients with post-polio syndrome.
Corrêa, J C F; Rocco, C Chiusoli de Miranda; de Andrade, D Ventura; Peres, J Augusto; Corrêa, F Ishida
2008-01-01
The aim of this study was to perform a comparative analysis of the electromyographic (EMG) activity of the rectus femoris, vastus medialis, and vastus lateralis muscles, and to assess muscle strength and fatigue after maximal isometric contraction during knee extension. Eighteen age- and weight-matched patients with post-polio syndrome participated in this study. The signal acquisition system consisted of three pairs of surface electrodes positioned on the motor points of the analyzed muscles. The results showed decreased endurance during the initial muscle contraction and during a contraction performed 15 minutes after the initial maximal voluntary contraction, along with muscle fatigue, which was assessed through linear regression with Pearson's test. There were significant differences in the EMG activity of the rectus femoris, vastus medialis, and vastus lateralis muscles after maximal isometric contraction during knee extension. Both the initial contraction and the contraction after a 15-minute rest decreased considerably, indicating reduced endurance and the presence of lower-limb muscle fatigue in the analyzed PPS patients.
Power Dependence in Individual Bargaining: The Expected Utility of Influence.
ERIC Educational Resources Information Center
Lawler, Edward J.; Bacharach, Samuel B.
1979-01-01
This study uses power-dependence theory as a framework for examining whether and how parties use information on each other's dependence to estimate the utility of an influence attempt. The effect of dependence on expected utilities is investigated (via role playing) in bargaining between an employer and an employee over a pay raise. (MF)
Disconfirmation of Expectations of Utility in e-Learning
ERIC Educational Resources Information Center
Cacao, Rosario
2013-01-01
Using pre-training and post-training paired surveys in e-learning based training courses, we have compared the "expectations of utility," measured at the beginning of an e-learning course, with the "perceptions of utility," measured at the end of the course, and related them to the trainees' motivation. We have concluded that…
Assessing park-and-ride impacts.
DOT National Transportation Integrated Search
2010-06-01
Efficient transportation systems are vital to quality-of-life and mobility issues, and an effective park-and-ride (P&R) network can help maximize system performance. Properly placed P&R facilities are expected to result in fewer calls to increase...
NASA Astrophysics Data System (ADS)
Likozar, Blaž; Major, Zoltan
2010-11-01
The purpose of this work was to prepare nanocomposites by mixing multi-walled carbon nanotubes (MWCNT) with nitrile and hydrogenated nitrile elastomers (NBR and HNBR). Utilization of transmission electron microscopy (TEM), scanning electron microscopy (SEM), and small- and wide-angle X-ray scattering techniques (SAXS and WAXS) for advanced morphology observation of conducting filler-reinforced nitrile and hydrogenated nitrile rubber composites is reported. The principal results were increases in hardness (maximally 97 Shore A), elastic modulus (maximally 981 MPa), tensile strength (maximally 27.7 MPa), elongation at break (maximally 216%), cross-link density (maximally 7.94 × 10²⁸ m⁻³), density (maximally 1.16 g cm⁻³), and tear strength (11.2 kN m⁻¹), which were clearly visible at particular acrylonitrile contents for both unhydrogenated and hydrogenated polymers, due to the enhanced distribution of carbon nanotubes (CNT) and their aggregated particles in the applied rubber matrix. The conclusion was that multi-walled carbon nanotubes improved the performance of nitrile and hydrogenated nitrile rubber nanocomposites prepared by melt compounding.
Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C
2004-06-30
Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.
Three faces of node importance in network epidemiology: Exact results for small graphs
NASA Astrophysics Data System (ADS)
Holme, Petter
2017-12-01
We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
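The first of the three measures, the expected outbreak size from a seed set, can be estimated by straightforward Monte Carlo simulation. A minimal sketch follows (a discrete-time SIR variant in which nodes recover after one step; the graph, transmission probability, and trial count are illustrative, and the paper itself computes these quantities exactly rather than by simulation):

```python
import random

def sir_outbreak_size(adj, seeds, beta, n_trials=2000, rng=None):
    """Monte Carlo estimate of the expected SIR outbreak size.
    adj: dict node -> set of neighbors; seeds: initially infectious nodes;
    beta: per-contact transmission probability. Nodes recover after one
    step (a simple discrete-time SIR approximation)."""
    rng = rng or random.Random(42)
    total = 0
    for _ in range(n_trials):
        infected, removed = set(seeds), set()
        while infected:
            new = set()
            for u in infected:
                for v in adj[u]:
                    if (v not in infected and v not in removed
                            and v not in new and rng.random() < beta):
                        new.add(v)
            removed |= infected  # everyone infectious this step recovers
            infected = new
        total += len(removed)
    return total / n_trials

# Path graph 0-1-2-3-4: the central node seeds the largest expected outbreak.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sizes = {s: sir_outbreak_size(path, {s}, beta=0.5) for s in path}
```

On this path graph, centrality and influence coincide; the paper's point is that for larger seed sets and other graphs the three importance measures can pick different node sets.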
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-01-01
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436
Schrempf, Alexandra; Giehr, Julia; Röhrl, Ramona; Steigleder, Sarah; Heinze, Jürgen
2017-04-01
One of the central tenets of life-history theory is that organisms cannot simultaneously maximize all fitness components. This results in the fundamental trade-off between reproduction and life span known from numerous animals, including humans. Social insects are a well-known exception to this rule: reproductive queens outlive nonreproductive workers. Here, we take a step forward and show that under identical social and environmental conditions the fecundity-longevity trade-off is absent also within the queen caste. A change in reproduction did not alter life expectancy, and even a strong enforced increase in reproductive efforts did not reduce residual life span. Generally, egg-laying rate and life span were positively correlated. Queens of perennial social insects thus seem to maximize at the same time two fitness parameters that are normally negatively correlated. Even though they are not immortal, they best approach a hypothetical "Darwinian demon" in the animal kingdom.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and for the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
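The core greedy idea, picking targets by completeness gained per unit of integration-plus-overhead time until the mission clock runs out, can be sketched as follows. The target names, completeness values, and costs are invented for illustration, and the actual AYO-based algorithm is more elaborate:

```python
def greedy_schedule(targets, mission_time):
    """Greedy target selection: take targets in order of completeness per
    unit time, keeping each one that still fits in the remaining mission
    time. targets: list of (name, completeness, cost), where cost is
    integration time plus overhead time."""
    remaining = mission_time
    chosen, total_completeness = [], 0.0
    # Sort once by reward/cost ratio, then keep whatever fits.
    for name, comp, cost in sorted(targets, key=lambda t: t[1] / t[2],
                                   reverse=True):
        if cost <= remaining:
            chosen.append(name)
            total_completeness += comp
            remaining -= cost
    return chosen, total_completeness

# Hypothetical stars: (name, single-visit completeness, cost in days).
stars = [("A", 0.30, 10.0), ("B", 0.20, 4.0), ("C", 0.15, 2.0), ("D", 0.05, 5.0)]
plan, score = greedy_schedule(stars, mission_time=12.0)
```

Here the expensive high-completeness target "A" is skipped because two cheaper targets plus "D" yield more sum completeness within the 12-day budget; this ratio-driven trade-off is what the greedy optimization exploits.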
Evaluating the Value of Information in the Presence of High Uncertainty
2013-06-01
in this hierarchy is subsumed in the Knowledge and Information layers. If information with high expected value is identified, it then passes up...be, the higher is its value. Based on the idea of expected utility of asking a question [36], Nelson [31] discusses different approaches for...18] formalizes the expected value of a sample of information using the concept of pre-posterior analysis as the expected increase in utility by
Are One Man's Rags Another Man's Riches? Identifying Adaptive Expectations Using Panel Data
ERIC Educational Resources Information Center
Burchardt, Tania
2005-01-01
One of the motivations frequently cited by Sen and Nussbaum for moving away from a utility metric towards a capabilities framework is a concern about adaptive preferences or conditioned expectations. If utility is related to the satisfaction of aspirations or expectations, and if these are affected by the individual's previous experience of…
Maximizing investments in work zone safety in Oregon : final report.
DOT National Transportation Integrated Search
2011-05-01
Due to the federal stimulus program and the 2009 Jobs and Transportation Act, the Oregon Department of Transportation (ODOT) anticipates that a large increase in highway construction will occur. There is the expectation that, since transportation saf...
ERIC Educational Resources Information Center
Lashway, Larry
1997-01-01
Principals today are expected to maximize their schools' performances with limited resources while also adopting educational innovations. This synopsis reviews five recent publications that offer some important insights about the nature of principals' leadership strategies: (1) "Leadership Styles and Strategies" (Larry Lashway); (2) "Facilitative…
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background: Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods: Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results: We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. 
Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
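One way a runtime model can cut costs is by assigning jobs to a fixed-size cluster in a runtime-aware order rather than randomly. A generic sketch using the classic Longest-Processing-Time-first heuristic (not Roundup's actual scheduler; the runtimes and worker count are invented):

```python
import heapq

def lpt_assign(job_runtimes, n_workers):
    """Longest-Processing-Time-first assignment of predicted job runtimes
    to workers: a classic heuristic for balancing load and so minimizing
    idle (billed but unused) instance time."""
    heap = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for runtime in sorted(job_runtimes, reverse=True):
        load, w = heapq.heappop(heap)         # least-loaded worker
        assignment[w].append(runtime)
        heapq.heappush(heap, (load + runtime, w))
    makespan = max(sum(js) for js in assignment.values())
    return assignment, makespan

# Hypothetical predicted runtimes (hours) for seven comparison jobs.
jobs = [9.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0]
_, makespan = lpt_assign(jobs, n_workers=3)
```

Submitting the longest predicted jobs first keeps workers busy until the end of the run, which is the same intuition behind ordering genome comparisons by modeled runtime instead of randomly.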
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that deriving the maximum expected throughput through exhaustive search is computationally infeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David
2010-05-01
Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). 
Results show that snowfall estimates above the freezing level in ETCs consistent with the triple frequency radar observations as well as with independent rainfall estimates below the freezing level may be derived using the EM methodology formulated in the study.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
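For context, the EM baseline that such Fisher-scoring alternatives are compared against is the classic MLEM update for emission tomography, lam ← lam · Aᵀ(y / (A lam)) / Aᵀ1, which increases the Poisson likelihood of the counts y at every iteration. A minimal sketch on a toy system matrix (the dimensions and data are illustrative, not clinical):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Classic MLEM iteration for emission tomography:
    lam <- lam * A^T(y / (A lam)) / (A^T 1).
    A: system matrix (measurements x voxels), y: measured counts."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # A^T 1, sensitivity per voxel
    for _ in range(n_iter):
        proj = A @ lam                   # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        lam *= (A.T @ ratio) / sens      # multiplicative EM update
    return lam

# Tiny 2-voxel, 3-measurement example with noiseless data:
# MLEM recovers the true activity.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
true_lam = np.array([2.0, 3.0])
y = A @ true_lam
est = mlem(A, y)
```

The multiplicative form makes each update cheap but the convergence slow; that slowness is exactly what motivates the direct-maximization (Fisher scoring, Jacobi, Gauss-Seidel) alternatives discussed in the paper.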
Networking Micro-Processors for Effective Computer Utilization in Nursing
Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia
1982-01-01
Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes the process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health. This collaboration has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.
Regan, Matthew D; Brauner, Colin J
2010-06-01
The Root effect, a reduction in blood oxygen (O(2)) carrying capacity at low pH, is used by many fish species to maximize O(2) delivery to the eye and swimbladder. It is believed to have evolved in the basal actinopterygian lineage of fishes, species that lack the intracellular pH (pH(i)) protection mechanism of more derived species' red blood cells (i.e., adrenergically activated Na(+)/H(+) exchangers; betaNHE). These basal actinopterygians may consequently experience a reduction in blood O(2) carrying capacity, and thus O(2) uptake at the gills, during hypoxia- and exercise-induced generalized blood acidoses. We analyzed the hemoglobins (Hbs) of seven species within this group [American paddlefish (Polyodon spathula), white sturgeon (Acipenser transmontanus), spotted gar (Lepisosteus oculatus), alligator gar (Atractosteus spatula), bowfin (Amia calva), mooneye (Hiodon tergisus), and pirarucu (Arapaima gigas)] for their Root effect characteristics so as to test the hypothesis of the Root effect onset pH value being lower than those pH values expected during a generalized acidosis in vivo. Analysis of the haemolysates revealed that, although each of the seven species displayed Root effects (ranging from 7.3 to 40.5% desaturation of Hb with O(2), i.e., Hb O(2) desaturation), the Root effect onset pH values of all species are considerably lower (ranging from pH 5.94 to 7.04) than the maximum blood acidoses that would be expected following hypoxia or exercise (pH(i) 7.15-7.3). Thus, although these primitive fishes possess Hbs with large Root effects and lack any significant red blood cell betaNHE activity, it is unlikely that the possession of a Root effect would impair O(2) uptake at the gills following a generalized acidosis of the blood. 
It was also shown that both the maximal Root effect and the Root effect onset pH values increased significantly in bowfin over those of the more basal species, toward values of similar magnitude to those of most of the more derived teleosts studied to date. This is paralleled by the initial appearance of the choroid rete in bowfin, as well as a significant decrease in Hb buffer value and an increase in Bohr/Haldane effects, together suggesting that the bowfin is the most basal species capable of utilizing its Root effect to maximize O(2) delivery to the eye.
Robust Coordination for Large Sets of Simple Rovers
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian
2006-01-01
The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge, and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex time dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover-utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition we show that our method creates rover-utilities that allow the rovers to create their control policies quickly and reliably. 
Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations, we show that our method scales well with large numbers of rovers, in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method scales to large numbers of rovers and achieves up to a 400% performance improvement over standard machine learning methods.
NASA Technical Reports Server (NTRS)
Byman, J. E.
1985-01-01
A brief history of aircraft production techniques is given. A flexible machining cell is then described. It is a computer controlled system capable of performing 4-axis machining, part cleaning, dimensional inspection, and materials handling functions in an unmanned environment. The cell was designed to: allow processing of similar and dissimilar parts in random order without disrupting production; allow serial (one-shipset-at-a-time) manufacturing; reduce work-in-process inventory; maximize machine utilization through remote set-up; and maximize throughput while minimizing labor.
A test of ecological optimality for semiarid vegetation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.
1992-01-01
Three ecological optimality hypotheses which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model are reviewed and tested. The first hypothesis involves short term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second hypothesis involves vegetation type selection again through soil moisture maximization, and the third involves soil genesis through plant induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.
Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert
2014-03-01
A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait, rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response, were significantly reduced when the perturbation was expected. While wearing an ankle brace influenced neither muscle preactivation nor the ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in their magnitude compared with braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.
Edwards, W; Fasolo, B
2001-01-01
This review is about decision technology: the rules and tools that help us make wiser decisions. First, we review the three rules at the heart of most traditional decision technology: multi-attribute utility, Bayes' theorem, and subjective expected utility maximization. Since the inception of decision research, these rules have prescribed how we should infer values and probabilities and how we should combine them to make better decisions. We suggest how to make best use of all three rules in a comprehensive 19-step model. The remainder of the review explores recently developed tools of decision technology. It examines the characteristics and problems of decision-facilitating sites on the World Wide Web. Such sites now provide anyone who can use a personal computer with access to very sophisticated decision-aiding tools, structured mainly to facilitate consumer decision making. It seems likely that the Web will be the mode by which decision tools are distributed to lay users. But methods for doing such apparently simple things as winnowing 3000 options down to a more reasonable number, like 10, contain traps for unwary decision technologists. The review briefly examines Bayes nets and influence diagrams, judgment and decision-making tools that are available as computer programs. It very briefly summarizes the state of the art of eliciting probabilities from experts. It concludes that decision tools will be as important in the 21st century as spreadsheets were in the 20th.
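Two of the three rules can be illustrated in a few lines: Bayes' theorem updates probabilities over states of the world, and subjective expected utility maximization picks the option with the largest probability-weighted utility. A toy sketch with invented states, likelihoods, and utilities, not any particular decision tool:

```python
def bayes_posterior(prior, likelihoods):
    """Bayes' theorem over discrete states: posterior proportional to
    prior times likelihood, then normalized."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def best_option(options, probs):
    """Subjective expected utility maximization: return the option whose
    probability-weighted utility is largest.
    options: {name: [utility in each state]}."""
    def seu(utilities):
        return sum(p * u for p, u in zip(probs, utilities))
    return max(options, key=lambda name: seu(options[name]))

# Two hypothetical states of the world, e.g. "market up" / "market down".
prior = [0.5, 0.5]
posterior = bayes_posterior(prior, likelihoods=[0.8, 0.2])  # evidence favors state 1
options = {"risky": [100.0, -50.0], "safe": [20.0, 20.0]}
choice = best_option(options, posterior)
```

With the evidence favoring the first state, the risky option's expected utility (0.8·100 + 0.2·(-50) = 70) beats the safe option's 20; reverse the evidence and the ranking flips, which is the whole point of combining inference with utility.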
Explaining the harmonic sequence paradox.
Schmidt, Ulrich; Zimper, Alexander
2012-05-01
According to the harmonic sequence paradox, an expected utility decision maker's willingness to pay for a gamble whose expected payoffs evolve according to the harmonic series is finite if and only if his marginal utility of additional income becomes zero for rather low payoff levels. Since the assumption of zero marginal utility is implausible for finite payoff levels, expected utility theory - as well as its standard generalizations such as cumulative prospect theory - are apparently unable to explain a finite willingness to pay. This paper presents first an experimental study of the harmonic sequence paradox. Additionally, it demonstrates that the theoretical argument of the harmonic sequence paradox only applies to time-patient decision makers, whereas the paradox is easily avoided if time-impatience is introduced. ©2011 The British Psychological Society.
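The arithmetic behind the paradox and its resolution can be checked directly; the discount factor below is an assumed illustration of time-impatience, not a parameter estimated in the paper.

```python
# Illustrative sketch: the undiscounted expected payoff of the
# harmonic-series gamble grows without bound, while time-impatience
# (a per-period discount factor delta < 1, an assumed parameterization)
# makes the willingness to pay finite.
import math

def undiscounted_value(n_terms):
    # Partial sum of the harmonic series 1 + 1/2 + ... + 1/n: diverges.
    return sum(1.0 / n for n in range(1, n_terms + 1))

def discounted_value(delta, n_terms=100_000):
    # sum over n >= 1 of delta**(n-1) / n converges to -ln(1 - delta) / delta.
    return sum(delta ** (n - 1) / n for n in range(1, n_terms + 1))
```

With delta = 0.9 the discounted value settles near 2.56, whereas the undiscounted partial sums keep growing by about ln(10) for every tenfold increase in terms.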
77 FR 25145 - Commerce Spectrum Management Advisory Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-27
... innovation as possible, and make wireless services available to all Americans. (See charter, at http://www... federal capabilities and maximizing commercial utilization. NTIA will post a detailed agenda on its Web...
Retrieval of cloud cover parameters from multispectral satellite images
NASA Technical Reports Server (NTRS)
Arking, A.; Childs, J. D.
1985-01-01
A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particle and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with cloud fraction for each pixel adjusted to achieve maximal clustering.
Optimal population size and endogenous growth.
Palivos, T; Yip, C K
1993-01-01
"Many applications in economics require the selection of an objective function which enables the comparison of allocations involving different population sizes. The two most commonly used criteria are the Benthamite and the Millian welfare functions, also known as classical and average utilitarianism, respectively. The former maximizes total utility of the society and thus represents individuals, while the latter maximizes average utility and so represents generations. Edgeworth (1925) was the first to conjecture that the Benthamite principle leads to a larger population size and a lower standard of living.... The purpose of this paper is to examine Edgeworth's conjecture in an endogenous growth framework in which there are interactions between output and population growth rates. It is shown that, under conditions that ensure an optimum, the Benthamite criterion leads to smaller population and higher output growth rates than the Millian." excerpt
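Edgeworth's conjecture can be illustrated with a toy sketch, assuming fixed output Y shared equally and logarithmic utility; this is an assumption-laden illustration, not the paper's endogenous-growth model.

```python
# Toy illustration of Edgeworth's conjecture (assumed setup, not the
# paper's model): fixed output Y is shared equally among N people with
# log utility; each planner chooses the population size N.
import math

def benthamite(Y, N):
    return N * math.log(Y / N)   # total utility of society

def millian(Y, N):
    return math.log(Y / N)       # average (per-capita) utility

def argmax(f, Y, candidates):
    return max(candidates, key=lambda N: f(Y, N))
```

With Y = 100, the Benthamite optimum sits near Y/e (about 37 people) while the Millian planner prefers the smallest possible population, so the Benthamite society is larger with a lower standard of living.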
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
Gray, D T; Weinstein, M C
1998-01-01
Decision and cost-utility analyses considered the tradeoffs of treating patent ductus arteriosus (PDA) using conventional surgery versus transcatheter implantation of the Rashkind occluder. Physicians and informed lay parents assigned utility scores to procedure success/complications combinations seen in prognostically similar pediatric patients with isolated PDA treated from 1982 to 1987. Utility scores multiplied by outcome frequencies from a comparative study generated expected utility values for the two approaches. Cost-utility analyses combined these results with simulated provider cost estimates from 1989. On a 0-100 scale (worst to best observed outcome), the median expected utility for surgery was 99.96, versus 98.88 for the occluder. Results of most sensitivity analyses also slightly favored surgery. Expected utility differences based on 1987 data were minimal. With a mean overall simulated cost of $8,838 vs $12,466 for the occluder, surgery was favored in most cost-utility analyses. Use of the inherently less invasive but less successful, more risky, and more costly occluder approach conferred no apparent net advantage in this study. Analyses of comparable current data would be informative.
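The expected-utility calculation described above (utility scores multiplied by outcome frequencies) reduces to a one-liner; the probabilities and 0-100 scores below are invented, not the study's data.

```python
# Minimal sketch of the expected-utility comparison described above;
# the outcome probabilities and utility scores are illustrative only.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility score on the 0-100 scale).
    return sum(p * u for p, u in outcomes)

# Invented outcome profiles for the two treatment approaches.
surgery  = [(0.98, 100.0), (0.02, 60.0)]
occluder = [(0.90, 100.0), (0.10, 75.0)]
```

Ranking the two treatments then amounts to comparing the two sums, exactly as the study did before folding in simulated provider costs.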
Long-Term Counterinsurgency Strategy: Maximizing Special Operations and Airpower
2010-02-01
operations forces possess a repertoire of capabilities and attributes which impart them with unique strategic utility. “That utility reposes most...flashlight), LTMs are employed in a similar role to cue aircrews equipped with Night Vision Devices (NVDs). Concurrently, employment of small laptop...Special Operations Forces (PSS-SOF) and Precision Fires Image Generator (PFIG) have brought similar benefit to the employment of GPS/INS targeted weapons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabita, F. Robert
2013-07-30
In this study, the Principal Investigator, F.R. Tabita, has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved, and this will allow for the potential utilization of waste materials as a feedstock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell, and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.
On Social Optima of Non-Cooperative Mean Field Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Zhao, Lin
This paper studies the social optima in noncooperative mean-field games for a large population of agents with heterogeneous stochastic dynamic systems. Each agent seeks to maximize an individual utility functional, and utility functionals of different agents are coupled through a mean field term that depends on the mean of the population states/controls. The paper has the following contributions. First, we derive a set of control strategies for the agents that possess the *-Nash equilibrium property and converge to the mean-field Nash equilibrium as the population size goes to infinity. Second, we study the social optimum in the mean field game. We derive the conditions, termed the socially optimal conditions, under which the *-Nash equilibrium of the mean field game maximizes the social welfare. Third, a primal-dual algorithm is proposed to compute the *-Nash equilibrium of the mean field game. Since the *-Nash equilibrium of the mean field game is socially optimal, we can compute the equilibrium by solving the social welfare maximization problem, which can be addressed by a decentralized primal-dual algorithm. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.
Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin
2013-03-01
Although de novo motifs can be discovered through mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider some additional binding features (i.e., position preference and sequence rank preference). This information is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses a pure probabilistic mixture model to model the motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position, and sequence rank preferences without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: the variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrated the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to a more difficult problem of finding the co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs with better matching to the known motifs. Finally, we show that the learned position and sequence rank preferences of each coTF reveal potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by the ChIP-Seq experiments of the coTFs. The application is available online.
NASA Astrophysics Data System (ADS)
Tian, F.; Lu, Y.
2017-12-01
Based on socioeconomic and hydrological data from three arid inland basins, together with error analysis, the dynamics of human water consumption (HWC) are shown to be asymmetric: HWC increases rapidly in wet periods but holds steady or decreases only slightly in dry periods. Beyond the qualitative explanation (abundant water in wet periods spurs rapid HWC growth, while in dry periods the now-expanded economy is sustained through over-exploitation), two quantitative models are established and tested, based on expected utility theory (EUT) and prospect theory (PT), respectively. EUT states that humans make decisions based on total expected utility, namely the sum of the utility of each outcome multiplied by its probability, while PT states that the utility function is defined over gains and losses separately and that probabilities should be replaced by a probability weighting function.
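The two decision models named above differ only in how outcomes and probabilities enter the sum. The sketch below uses the standard Tversky-Kahneman weighting form as an illustrative choice, not necessarily the paper's fitted specification.

```python
# Hedged sketch contrasting the two decision models named above.  The
# probability-weighting function is the standard Tversky-Kahneman form,
# used here purely for illustration.
def eut_value(prospect, utility):
    # Expected utility theory: sum of probability * utility(outcome).
    return sum(p * utility(x) for p, x in prospect)

def pt_value(prospect, value, weight):
    # Prospect theory: value is defined over gains/losses, and raw
    # probabilities are transformed by a weighting function.
    return sum(weight(p) * value(x) for p, x in prospect)

def tk_weight(p, gamma=0.61):
    # Tversky-Kahneman weighting: overweights small probabilities.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

With an identity value function and identity weighting, prospect theory collapses back to expected utility, which makes the structural difference between the two models explicit.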
Health status and health dynamics in an empirical model of expected longevity.
Benítez-Silva, Hugo; Ni, Huan
2008-05-01
Expected longevity is an important factor influencing older individuals' decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman [Grossman, M., 1972. On the concept of health capital and demand for health. Journal of Political Economy 80, 223-255] has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics.
Zhao, Panpan; Zhong, Jiayong; Liu, Wanting; Zhao, Jing; Zhang, Gong
2017-12-01
Multiple search engines based on various models have been developed to search MS/MS spectra against a reference database, providing different results for the same data set. How to integrate these results efficiently with minimal compromise on false discoveries is an open question due to the lack of an independent, reliable, and highly sensitive standard. We took advantage of the translating mRNA sequencing (RNC-seq) result as a standard to evaluate the integration strategies of the protein identifications from various search engines. We used seven mainstream search engines (Andromeda, Mascot, OMSSA, X!Tandem, pFind, InsPecT, and ProVerB) to search the same label-free MS data sets of human cell lines Hep3B, MHCCLM3, and MHCC97H from the Chinese C-HPP Consortium for Chromosomes 1, 8, and 20. As expected, the union of the seven engines resulted in a boosted false identification rate, whereas their intersection markedly decreased the identification power. We found that requiring identification by at least two of the seven engines maximized the protein identification power while minimizing the ratio of suspicious/translation-supported identifications (STR), as monitored by our STR index, based on RNC-seq. Furthermore, this strategy also significantly improves the peptide coverage of the protein amino acid sequence. In summary, we demonstrated a simple strategy to significantly improve the performance of shotgun mass spectrometry by integrating multiple search engines at the protein level, maximizing the utilization of the current MS spectra without additional experimental work.
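The two-of-seven integration rule described above amounts to counting, per protein, how many engines report it; a minimal sketch with invented identifiers follows.

```python
# Sketch of the integration rule described above: accept a protein only
# when at least k of the n engines identify it (k = 2 of 7 in the study).
# The engine outputs used with this function are invented identifiers.
from collections import Counter

def integrate(engine_results, k=2):
    # engine_results: one collection of protein IDs per search engine.
    counts = Counter(pid for result in engine_results for pid in set(result))
    return {pid for pid, c in counts.items() if c >= k}
```

Setting k = 1 recovers the union (maximal but error-prone) and k = n the intersection (conservative), with the paper's k = 2 sitting between the two extremes.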
Safdie, Fernando M; Sanchez, Manuel Villa; Sarkaria, Inderpal S
2017-01-01
Video assisted thoracic surgery (VATS) has become a routinely utilized approach to complex procedures of the chest, such as pulmonary resection. It has been associated with decreased postoperative pain, shorter length of stay and lower incidence of complications such as pneumonia. Limitations to this modality may include limited exposure, lack of tactile feedback, and a two-dimensional view of the surgical field. Furthermore, the lack of an open incision may incur technical challenges in preventing and controlling operative misadventures leading to major hemorrhage or other intraoperative emergencies. While these events may occur in the best of circumstances, prevention strategies are the primary means of avoiding these injuries. Unplanned conversions for major intraoperative bleeding or airway injury during general thoracic surgical procedures are relatively rare and often can be avoided with careful preoperative planning, review of relevant imaging, and meticulous surgical technique. When these events occur, a pre-planned, methodical response with initial control of bleeding, assessment of injury, and appropriate repair and/or salvage procedures are necessary to maximize outcomes. The surgeon should be well versed in injury-specific incisions and approaches to maximize adequate exposure and when feasible, allow completion of the index operation. Decisions to continue with a minimally invasive approach should consider the comfort and experience level of the surgeon with these techniques, and the relative benefit gained against the risk incurred to the patient. These algorithms may be expected to shift in the future with increasing sophistication and capabilities of minimally invasive technologies and approaches.
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
DOT report for implementing OMB's information dissemination quality guidelines
DOT National Transportation Integrated Search
2002-08-01
Consistent with The Office of Management and Budget's (OMB) Guidelines (for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies) implementing Section 515 of the Treasury and...
Engaging Older Adult Volunteers in National Service
ERIC Educational Resources Information Center
McBride, Amanda Moore; Greenfield, Jennifer C.; Morrow-Howell, Nancy; Lee, Yung Soo; McCrary, Stacey
2012-01-01
Volunteer-based programs are increasingly designed as interventions to affect the volunteers and the beneficiaries of the volunteers' activities. To achieve the intended impacts for both, programs need to leverage the volunteers' engagement by meeting their expectations, retaining them, and maximizing their perceptions of benefits. Programmatic…
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not achieve high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO), and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distributions accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified by the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
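As a flavor of the PSO step alone (the paper couples PSO with the ANN surrogate and EM), here is a toy one-dimensional source estimate against an assumed exponential-decay concentration model; every name and number below is illustrative.

```python
# Toy particle-swarm sketch of the source-estimation step.  The paper's
# full method uses an ANN dispersion surrogate plus EM; here we simply
# minimize squared misfit between "observed" and modeled concentrations
# from an assumed 1-D exponential-decay model.
import random

def pso(objective, lo, hi, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(42)                       # fixed seed for repeatability
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                                 # each particle's best position
    pbest_f = [objective(x) for x in xs]
    gbest = min(pbest, key=objective)             # swarm-wide best position
    for _ in range(n_iter):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to the search bounds
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
                if f < objective(gbest):
                    gbest = xs[i]
    return gbest
```

In a test, four hypothetical sensors observe concentrations from a source at position 3 under the decay model exp(-|x - s|), and the swarm recovers the source location to within a small tolerance.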
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can be easily affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: the filtered MLEM-based global reconstruction method for BLT.
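Stripped of the SPN light-propagation model and the filter, the MLEM core is the standard multiplicative update for a nonnegative linear model y ≈ Ax; the toy system below is illustrative, not the paper's implementation.

```python
# Generic MLEM sketch (not the paper's SPN-based implementation): the
# standard multiplicative maximum-likelihood EM update for a nonnegative
# linear forward model y ~ A x.
import numpy as np

def mlem(A, y, n_iter=100):
    x = np.ones(A.shape[1])                # nonnegative initial estimate
    sens = A.T @ np.ones(A.shape[0])       # sensitivity term A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + 1e-12)        # measured / currently predicted
        x = x / sens * (A.T @ ratio)       # multiplicative MLEM update
    return x
```

On a small consistent toy system the iterates converge to the exact source intensities, which is what the maximum-likelihood fixed point guarantees in the noiseless case.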
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (azimuth).
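The first step described above can be sketched as follows, under an assumed formulation in which distracters pad the sequence until targets form the desired fraction of all images; the paper's exact formula may differ.

```python
# Sketch of the distracter-count step under an assumed formulation (the
# paper's exact formula may differ): pad the sequence with distracters
# until targets make up the desired fraction of all presented images.
import math

def n_distracters(n_targets, n_images, target_ratio=0.1):
    # Total sequence length at which targets are `target_ratio` of all
    # images; any slots beyond the current images become distracters.
    required_total = math.ceil(n_targets / target_ratio)
    return max(0, required_total - n_images)
```

For instance, 5 expected targets at a desired 1-in-4 rarity require a 20-image sequence, so a 15-image sequence would gain 5 distracters.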
Choosing Fitness-Enhancing Innovations Can Be Detrimental under Fluctuating Environments
Xue, Julian Z.; Costopoulos, Andre; Guichard, Frederic
2011-01-01
The ability to predict the consequences of one's behavior in a particular environment is a mechanism for adaptation. In the absence of any cost to this activity, we might expect agents to choose behaviors that maximize their fitness, an example of directed innovation. This is in contrast to blind mutation, where the probability of becoming a new genotype is independent of the fitness of the new genotypes. Here, we show that under environments punctuated by rapid reversals, a system with both genetic and cultural inheritance should not always maximize fitness through directed innovation. This is because populations highly accurate at selecting the fittest innovations tend to over-fit the environment during its stable phase, to the point that a rapid environmental reversal can cause extinction. A less accurate population, on the other hand, can track long term trends in environmental change, keeping closer to the time-average of the environment. We use both analytical and agent-based models to explore when this mechanism is expected to occur. PMID:22125601
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model in synthetic data and brain MRI data. The proposed methodology presents two main advantages: Firstly, it is more robust to outliers. Secondly, we obtain similar results than using Gaussian when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Solomon, Harvey
2011-01-01
In 1989, there were 19,000 patients on the UNOS (United Network for Organ Sharing) wait list for organs, compared to 110,000 today. Without an equivalent increase in donors, the patients awaiting these organs face increasing severity of illness and risk of dying without receiving a transplant. This disparity between supply and demand has led to the acceptance of organs with lower than expected success rates compared to previous standard donors, variously defined as extended criteria donors, in order to increase transplantation. The reluctance to use these types of organs more widely is based on the lower than expected transplant center graft and patient survival results associated with their use, as well as the increased resources required to care for the patients who receive these organs. The benefits need to be compared to the survival of patients not receiving a transplant and remaining on the waiting list, rather than to the outcomes of receiving a standard donor organ. The lack of a systematic risk outcomes adjustment is one of the most important factors preventing more extensive utilization, as transplant centers are held to patient and graft survival statistics as a performance measure by multiple regulatory organizations and insurers. Newer classification systems for such donors may allow a more systematic approach to analyzing the specific risks to individual patients. Due to changes in donor policies across the country, there has been an increase in Extended Criteria Donor (ECD) organs procured by organ procurement organizations (OPO), but their uneven acceptance by transplant centers has contributed to an increase in discards and organs not being used. This is one of the reasons that wider sharing of organs is currently receiving much attention. Transplanting ECD organs presents unique challenges and requires innovative approaches to achieve satisfactory results.
Improved logistics combined with information technology strategies for improving donor quality may prevent discards while ensuring maximal benefit. Transplant centers, organ procurement organizations, third-party payers, and government agencies all must be involved in maximizing the potential of ECD organs.
Using return on investment to maximize conservation effectiveness in Argentine grasslands.
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-12-07
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
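The three strategies compared above can be sketched as greedy selection rules that differ only in their sort key; the parcel names, benefits, and costs below are invented.

```python
# Greedy sketches of the three strategies compared above (all parcel
# data are invented).  Each strategy buys parcels in the order given by
# its sort key until the budget runs out.
def select(parcels, budget, key):
    # parcels: list of (name, conservation benefit, cost in dollars).
    chosen, spent = [], 0.0
    for name, benefit, cost in sorted(parcels, key=key):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

minimize_cost    = lambda p: p[2]          # "minimize cost": cheapest land first
maximize_benefit = lambda p: -p[1]         # "maximize benefit": highest benefit first
roi              = lambda p: -p[1] / p[2]  # "return on investment": benefit per dollar
```

On a small invented example, the ROI rule buys two cheap high-benefit parcels and outperforms both alternatives under the same budget, mirroring the paper's conclusion.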
An elicitation of utility for quality of life under prospect theory.
Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier; Pinto, Jose Luis
2016-07-01
This paper performs several tests of decision analysis applied to the health domain. First, we conduct a test of the normative expected utility theory. Second, we investigate the possibility to elicit the more general prospect theory. We observe risk aversion for gains and losses and violations of expected utility. These results imply that mechanisms governing decisions in the health domain are similar to those in the monetary domain. However, we also report one important deviation: utility is universally concave for the health outcomes used in this study, in contrast to the commonly found S-shaped utility for monetary outcomes, with concave utility for gains and convex utility for losses. Copyright © 2016 Elsevier B.V. All rights reserved.
The Priority Heuristic: Making Choices without Trade-Offs
ERIC Educational Resources Information Center
Brandstatter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2006-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic…
Robustness via Run-Time Adaptation of Contingent Plans
NASA Technical Reports Server (NTRS)
Bresina, John L.; Washington, Richard; Norvig, Peter (Technical Monitor)
2000-01-01
In this paper, we discuss our approach to making the behavior of planetary rovers more robust for the purpose of increased productivity. Due to the inherent uncertainty in rover exploration, the traditional approach to rover control is conservative, limiting the autonomous operation of the rover and sacrificing performance for safety. Our objective is to increase the science productivity possible within a single uplink by allowing the rover's behavior to be specified with flexible, contingent plans and by employing dynamic plan adaptation during execution. We have deployed a system exhibiting flexible, contingent execution; this paper concentrates on our ongoing efforts on plan adaptation. Plans can be revised in two ways: plan steps may be deleted, with execution continuing with the plan suffix; and the current plan may be merged with an "alternate plan" from an on-board library. The plan revision action is chosen to maximize the expected utility of the plan. Plan merging and action deletion constitute a more conservative general-purpose planning system; in return, our approach is more efficient and more easily verified, two important criteria for deployed rovers.
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2017-05-30
Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions. Copyright © 2017 John Wiley & Sons, Ltd.
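The core clustering step can be illustrated with a drastically simplified sketch: a one-dimensional, two-component Gaussian mixture fitted by EM, with subjects assigned to the cluster of highest posterior probability. The paper's actual model is a multivariate generalized linear mixed model fitted by Monte Carlo EM; everything below (the synthetic data, the component count, and the 1-D outcome) is an illustrative assumption.

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM and return the
    component means and the posterior responsibilities (n x 2)."""
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma2 = np.array([np.var(x), np.var(x)])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component per subject
        dens = pi / np.sqrt(2 * np.pi * sigma2) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    return mu, resp

# Synthetic "non-responders" around 0 and "responders" around 4:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])
mu, resp = em_two_gaussians(x)
labels = resp.argmax(axis=1)   # classify by highest posterior probability
print(np.sort(np.round(mu, 1)))
```

With well-separated clusters the estimated means land near the true values of 0 and 4, and each subject's label is simply the component with the larger responsibility.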
Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips
NASA Technical Reports Server (NTRS)
Tse, David G.N.; Steuber, Gary
1996-01-01
Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24 to assess the influence of trips, passage curvature and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward flow passage and at +45 deg in the inward flow passage maximize heat transfer. The present measurements were related to the heat transfer measurements of Johnson et al. to link the flow field to the observed heat transfer.
Model of personal consumption under conditions of modern economy
NASA Astrophysics Data System (ADS)
Rakhmatullina, D. K.; Akhmetshina, E. R.; Ignatjeva, O. A.
2017-12-01
In the modern economy, the development of production, the expansion and differentiation of the market for goods and services, and the active use of marketing tools in sales have changed the system of consumer values and needs. The motives that drive consumers are transformed, stimulating them to act. The article presents a model of personal consumption that takes into account modern trends in consumer behavior. In making a choice, the consumer seeks to maximize the overall utility from consumption, both physiological and socio-psychological satisfaction, in accordance with his or her expectations, preferences and conditions of consumption. The system of preferences is formed under the influence of factors of different natures. It is also shown that the structure of consumer spending allows us to characterize and predict further consumer behavior in the market. Based on the proposed model and an analysis of current trends in consumer behavior, conclusions and recommendations are made that can be used by legislative and executive government bodies, business organizations, research centres and other structures to form a methodological and analytical tool for preparing a forecast model of consumption.
Extrinsic Sources of Scatter in the Richness-mass Relation of Galaxy Clusters
NASA Astrophysics Data System (ADS)
Rozo, Eduardo; Rykoff, Eli; Koester, Benjamin; Nord, Brian; Wu, Hao-Yi; Evrard, August; Wechsler, Risa
2011-10-01
Maximizing the utility of upcoming photometric cluster surveys requires a thorough understanding of the richness-mass relation of galaxy clusters. We use Monte Carlo simulations to study the impact of various sources of observational scatter on this relation. Cluster ellipticity, photometric errors, photometric redshift errors, and cluster-to-cluster variations in the properties of red-sequence galaxies contribute negligible noise. Miscentering, however, can be important, and likely contributes to the scatter in the richness-mass relation of galaxy maxBCG clusters at the low-mass end, where centering is more difficult. We also investigate the impact of projection effects under several empirically motivated assumptions about cluster environments. Using Sloan Digital Sky Survey data and the maxBCG cluster catalog, we demonstrate that variations in cluster environments can rarely (≈1%-5% of the time) result in significant richness boosts. Due to the steepness of the mass/richness function, the corresponding fraction of optically selected clusters that suffer from these projection effects is ≈5%-15%. We expect these numbers to be generic in magnitude, but a precise determination requires detailed, survey-specific modeling.
Method for Controlled Mitochondrial Perturbation during Phosphorus MRS in Children
Cree-Green, Melanie; Newcomer, Bradley R.; Brown, Mark; Hull, Amber; West, Amy D.; Singel, Debra; Reusch, Jane E.B.; McFann, Kim; Regensteiner, Judith G.; Nadeau, Kristen J.
2014-01-01
Introduction Insulin resistance (IR) is increasingly prevalent in children, and may be related to muscle mitochondrial dysfunction, necessitating development of mitochondrial assessment techniques. Recent studies used 31Phosphorus magnetic resonance spectroscopy (31P-MRS), a non-invasive technique appealing for clinical research. 31P-MRS requires exercise at a precise percentage of maximum volitional contraction (MVC). MVC measurement in children, particularly with disease, is problematic due to variability in perception of effort and motivation. We therefore developed a method to predict MVC, using maximal calf muscle cross-sectional area (MCSA), to assure controlled and reproducible muscle metabolic perturbations. Methods Data were collected from 66 sedentary 12–20 year-olds. Plantar flexion-volitional MVC was assessed using an MRI-compatible exercise treadle device. MCSA of the calf muscles was measured from MRI images. Data from the first 26 participants were utilized to model the relationship between MVC and MCSA (predicted MVC = 24.763+0.0047*MCSA). This model was then applied to the subsequent 40 participants. Results Volitional vs. model-predicted mean MVC was 43.9±0.8 kg vs. 44.2±1.81 kg (P=0.90). When predicted and volitional MVC were similar, 31P-MRS showed the expected metabolic changes during volitional MVC-based exercise. In contrast, volitional MVC was markedly lower than predicted in 4 participants, and produced minimal metabolic perturbation. Upon repeat testing, these individuals could perform their predicted MVC with coaching, which produced the expected metabolic perturbations. Conclusions Compared to using MVC testing alone, utilizing MRI to predict muscle strength allows for a more accurate and standardized 31P-MRS protocol during exercise in children. This method overcomes a major obstacle in assessing mitochondrial function in youth.
These studies have importance as we seek to determine the role of mitochondrial function in youth with IR and diabetes and response to interventions. PMID:24576856
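The reported calibration is a simple linear model and can be written directly as code. The coefficients are taken from the abstract; the MCSA units follow the original study, and the example input of 4000 is an illustrative assumption.

```python
def predicted_mvc(mcsa):
    """Predicted maximal volitional contraction (kg) from maximal calf
    muscle cross-sectional area (MCSA), using the linear model reported
    in the abstract: predicted MVC = 24.763 + 0.0047 * MCSA."""
    return 24.763 + 0.0047 * mcsa

# A hypothetical MCSA of 4000 (in the study's MCSA units) gives a
# predicted MVC close to the reported mean of ~44 kg.
print(predicted_mvc(4000))
```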
Intelligent transportation systems : tools to maximize state transportation investments
DOT National Transportation Integrated Search
1997-07-28
This Issue Brief summarizes national ITS goals and state transportation needs. It reviews states experience with ITS to date and discusses the utility of ITS technologies to improve transportation infrastructure. The Issue Brief also provides cost...
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes, an optimization
Subjective Expected Utility: A Model of Decision-Making.
ERIC Educational Resources Information Center
Fischoff, Baruch; And Others
1981-01-01
Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
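The robustness mechanism can be sketched in a toy setting. Under a Student's t noise model, the EM E-step assigns each residual a precision weight w_i = (ν+1)/(ν + r_i²/σ²), so gross outliers are automatically downweighted. The sketch below estimates a 1-D location parameter this way; it is not the authors' ECME/Levenberg-Marquardt calibration pipeline, and the data, ν, and iteration count are illustrative assumptions.

```python
import numpy as np

def t_robust_location(y, nu=3.0, iters=50):
    """EM estimate of a location parameter under Student's t noise.

    E-step: weight each residual by w_i = (nu + 1) / (nu + r_i^2 / sigma^2),
    so gross outliers receive weights near zero.
    M-step: weighted updates of the location and scale parameters.
    """
    mu, sigma2 = np.median(y), np.var(y)
    for _ in range(iters):
        r2 = (y - mu) ** 2
        w = (nu + 1.0) / (nu + r2 / sigma2)      # E-step weights
        mu = np.sum(w * y) / np.sum(w)           # M-step: weighted mean
        sigma2 = np.sum(w * (y - mu) ** 2) / len(y)
    return mu

# 100 well-behaved points plus two gross outliers (e.g. interference):
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(5.0, 0.1, 100), [50.0, 60.0]])
print(y.mean())              # the plain mean is dragged toward the outliers
print(t_robust_location(y))  # the t-based estimate stays near 5
```

This is the same qualitative effect the abstract describes: under the t model, data that the Gaussian model would let corrupt the fit contribute almost nothing to the estimate.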
Adar, Shay; Dor, Roi
2018-02-01
Habitat choice is an important decision that influences animals' fitness. Insect larvae are less mobile than the adults. Consequently, the contribution of the maternal choice of habitat to the survival and development of the offspring is considered to be crucial. According to the "preference-performance hypothesis", ovipositing females are expected to choose habitats that will maximize the performance of their offspring. We tested this hypothesis in wormlions (Diptera: Vermileonidae), which are small sand-dwelling insects that dig pit-traps in sandy patches and ambush small arthropods. Larvae prefer relatively deep and obstacle-free sand, and here we tested the habitat preference of the ovipositing female. Contrary to our expectation, and in contrast to the larval choice, ovipositing females showed no clear preference for either deep sand or an obstacle-free habitat. This suboptimal female choice led to smaller pits being constructed later by the larvae, which may reduce the larvae's prey-capture success. We offer several explanations for this apparently suboptimal female behavior, related either to maximizing maternal rather than offspring fitness, or to constraints on the female's behavior. The female's ovipositing habitat choice may have weaker negative consequences for the offspring than expected, as larvae can partially correct a suboptimal maternal choice. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal Operation of Data Centers in Future Smart Grid
NASA Astrophysics Data System (ADS)
Ghamkhari, Seyed Mahdi
The emergence of cloud computing has established a growing trend towards building massive, energy-hungry, and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have major impact on the electric grid by significantly increasing the load at locations where they are built. However, data centers also provide opportunities to help the grid with respect to robustness and load balancing. For instance, as data centers are major and yet flexible electric loads, they can be proper candidates to offer ancillary services, such as voluntary load reduction, to the smart grid. Also, data centers may better stabilize the price of energy in the electricity markets, and at the same time reduce their electricity cost by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. In this thesis, such potentials are investigated within an analytical profit maximization framework by developing new mathematical models based on queuing theory. The proposed models capture the trade-off between quality-of-service and power consumption in data centers. They are not only accurate, but they also possess convexity characteristics that facilitate joint optimization of data centers' service rates, demand levels and demand bids to different electricity markets. The analysis is further expanded to also develop a unified comprehensive energy portfolio optimization for data centers in the future smart grid. Specifically, it is shown how utilizing one energy option may affect selecting other energy options that are available to a data center. For example, we will show that the use of on-site storage and the deployment of geographical workload distribution can particularly help data centers in utilizing high-risk energy options such as renewable generation.
The analytical approach in this thesis takes into account service-level-agreements, risk management constraints, and also the statistical characteristics of the Internet workload and the electricity prices. Using empirical data, the performance of our proposed profit maximization models for data centers are evaluated, and the capability of data centers to benefit from participation in a variety of Demand Response programs is assessed.
Pastor-Bernier, Alexandre; Plott, Charles R.; Schultz, Wolfram
2017-01-01
Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices “as if” they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals’ choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals’ preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved “as if” they had well-structured preferences and maximized utility. PMID:28202727
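The consistency test the authors apply can be made concrete with a minimal sketch of a WARP check: if x is chosen from a menu that contains y, then y must never be chosen from another menu that also contains x. The menus and option names below are hypothetical.

```python
def violates_warp(choices):
    """Check a set of observed choices for violations of the Weak Axiom
    of Revealed Preference (WARP): if x is chosen from a menu containing
    y, then y must not be chosen from any menu that also contains x.

    choices: iterable of (menu, chosen) pairs, with each menu a set.
    """
    for menu_a, x in choices:
        for menu_b, y in choices:
            if x != y and y in menu_a and x in menu_b:
                return True
    return False

# Hypothetical bundles A, B, C:
consistent = [({"A", "B"}, "A"), ({"A", "B", "C"}, "A")]
inconsistent = [({"A", "B"}, "A"), ({"A", "B", "C"}, "B")]
print(violates_warp(consistent), violates_warp(inconsistent))
```

The second data set violates WARP because A is revealed preferred to B in the first menu, yet B is chosen over A in the second. The paper's stochastic-preference version is more permissive than this deterministic check, which is only meant to illustrate the axiom.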
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. 
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
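The stated linear-combination property of the expected loss given failure translates directly into code. The three failure modes, their conditional probabilities, and their per-mode losses below are hypothetical numbers chosen only to illustrate the formula.

```python
def expected_loss_given_failure(mode_probs, mode_losses):
    """Expected loss given failure for mutually exclusive failure modes:
    a linear combination of per-mode expected losses, weighted by the
    conditional probabilities that each mode initiates the failure."""
    assert abs(sum(mode_probs) - 1.0) < 1e-9, "mode probabilities must sum to 1"
    return sum(p * loss for p, loss in zip(mode_probs, mode_losses))

# Hypothetical system with three mutually exclusive failure modes:
print(expected_loss_given_failure([0.5, 0.3, 0.2], [1000.0, 5000.0, 20000.0]))
```

Note how a rare but expensive mode (probability 0.2, loss 20000) dominates the expected loss, which is exactly why a system with higher overall reliability need not have smaller losses from failures.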
Smoking Outcome Expectancies among College Students.
ERIC Educational Resources Information Center
Brandon, Thomas H.; Baker, Timothy B.
Alcohol expectancies have been found to predict later onset of drinking among adolescents. This study examined whether the relationship between level of alcohol use and expectancies is paralleled with cigarette smoking, and attempted to identify the content of smoking expectancies. An instrument to measure the subjective expected utility of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Mark H., E-mail: markp@u.washington.ed; Smith, Wade P.; Parvathaneni, Upendra
2011-03-15
Purpose: To determine under what conditions positron emission tomography (PET) imaging will be useful in decisions regarding the use of radiotherapy for the treatment of clinically occult lymph node metastases in head-and-neck cancer. Methods and Materials: A decision model of PET imaging and its downstream effects on radiotherapy outcomes was constructed using an influence diagram. This model included the sensitivity and specificity of PET, as well as the type and stage of the primary tumor. These parameters were varied to determine the optimal strategy for imaging and therapy for different clinical situations. Maximum expected utility was the metric by which different actions were ranked. Results: For primary tumors with a low probability of lymph node metastases, the sensitivity of PET should be maximized, and 50 Gy should be delivered if PET is positive and 0 Gy if negative. As the probability for lymph node metastases increases, PET imaging becomes unnecessary in some situations, and the optimal dose to the lymph nodes increases. The model needed to include the causes of certain health states to predict current clinical practice. Conclusion: The model demonstrated the ability to reproduce expected outcomes for a range of tumors and provided recommendations for different clinical situations. The differences between the optimal policies and current clinical practice are likely due to a disparity between stated clinical decision processes and actual decision making by clinicians.
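The maximum-expected-utility ranking used in the model can be sketched for a single decision node. The utility values below are hypothetical placeholders, not the paper's elicited utilities; the sketch only shows why a low prior probability of nodal metastasis favors withholding dose while a higher prior favors treating.

```python
def best_action(prior, utilities):
    """Rank actions by expected utility under a prior probability of
    occult nodal disease; return the action with maximum expected utility.

    utilities: {action: (u_if_disease, u_if_no_disease)} -- hypothetical.
    """
    eu = {a: prior * u_d + (1 - prior) * u_n
          for a, (u_d, u_n) in utilities.items()}
    return max(eu, key=eu.get)

# Hypothetical utilities: treating gives good disease control (0.8) at
# some cost to quality of life if disease is absent (0.9); not treating
# is ideal without disease (1.0) but poor with it (0.2).
U = {"0 Gy": (0.2, 1.0), "50 Gy": (0.8, 0.9)}
print(best_action(0.05, U))  # low prior probability of metastasis
print(best_action(0.40, U))  # higher prior probability
```

At these numbers the optimal action flips between a prior of 5% and one of 40%, mirroring the abstract's pattern of dose recommendations rising with the probability of lymph node metastases.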
Technologies for Prolonging Cardiac Implantable Electronic Device Longevity.
Lau, Ernest W
2017-01-01
Prolonged longevity of cardiac implantable electronic devices (CIEDs) is needed not only as a passive response to match the prolonging life expectancy of patient recipients, but will also actively prolong their life expectancy by avoiding/deferring the risks (and costs) associated with device replacement. CIEDs are still exclusively powered by nonrechargeable primary batteries, and energy exhaustion is the dominant and an inevitable cause of device replacement. The longevity of a CIED is thus determined by the attrition rate of its finite energy reserve. The energy available from a battery depends on its capacity (total amount of electric charge), chemistry (anode, cathode, and electrolyte), and internal architecture (stacked plate, folded plate, and spiral wound). The energy uses of a CIED vary and include a background current for running electronic circuitry, periodic radiofrequency telemetry, high-voltage capacitor reformation, constant ventricular pacing, and sporadic shocks for the cardiac resynchronization therapy defibrillators. The energy use by a CIED is primarily determined by the patient recipient's clinical needs, but the energy stored in the device battery is entirely under the manufacturer's control. A larger battery capacity generally results in a longer-lasting device, but improved battery chemistry and architecture may allow more space-efficient designs. Armed with the necessary technical knowledge, healthcare professionals and purchasers will be empowered to make judicious selection on device models and maximize the utilization of all their energy-saving features, to prolong device longevity for the benefits of their patients and healthcare systems. © 2016 Wiley Periodicals, Inc.
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Meurrens, Julie; Steiner, Thomas; Ponette, Jonathan; Janssen, Hans Antonius; Ramaekers, Monique; Wehrlin, Jon Peter; Vandekerckhove, Philippe; Deldicque, Louise
2016-12-01
The aims of the present study were to investigate the impact of three whole blood donations on endurance capacity and hematological parameters and to determine the duration to fully recover initial endurance capacity and hematological parameters after each donation. Twenty-four moderately trained subjects were randomly divided in a donation (n = 16) and a placebo (n = 8) group. Each of the three donations was interspersed by 3 months, and the recovery of endurance capacity and hematological parameters was monitored up to 1 month after donation. Maximal power output, peak oxygen consumption, and hemoglobin mass decreased (p < 0.001) up to 4 weeks after a single blood donation with a maximal decrease of 4, 10, and 7%, respectively. Hematocrit, hemoglobin concentration, ferritin, and red blood cell count (RBC), all key hematological parameters for oxygen transport, were lowered by a single donation (p < 0.001) and cumulatively further affected by the repetition of the donations (p < 0.001). The maximal decrease after a blood donation was 11% for hematocrit, 10% for hemoglobin concentration, 50% for ferritin, and 12% for RBC (p < 0.001). Maximal power output cumulatively increased in the placebo group as the maximal exercise tests were repeated (p < 0.001), which indicates positive training adaptations. This increase in maximal power output over the whole duration of the study was not observed in the donation group. Maximal, but not submaximal, endurance capacity was altered after blood donation in moderately trained people and the expected increase in capacity after multiple maximal exercise tests was not present when repeating whole blood donations.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
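A minimal sketch of the EM estimator for a two-component univariate Gaussian mixture (the study's modified Newton and minimum-distance competitors and its manipulated factors are not reproduced; the sample size of 160 matches the study design, but the component parameters below are illustrative):

```python
import math
import random

def em_two_gaussians(data, iters=200):
    """EM estimates (pi, mu1, sd1, mu2, sd2) for a two-component
    univariate Gaussian mixture; pi is the weight of component 1."""
    xs = sorted(data)
    half = len(xs) // 2
    # Crude initialization: split the sorted sample in half.
    mu1, mu2 = sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)
    sd1 = sd2 = (xs[-1] - xs[0]) / 4.0
    pi = 0.5
    pdf = lambda x, m, s: math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = [pi * pdf(x, mu1, sd1) /
             (pi * pdf(x, mu1, sd1) + (1 - pi) * pdf(x, mu2, sd2)) for x in data]
        # M-step: responsibility-weighted means, SDs, and mixing weight.
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        sd1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        sd2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
        pi = n1 / len(data)
    return pi, mu1, sd1, mu2, sd2

random.seed(1)
# Size-160 sample as in the study; the mean separation here is illustrative.
data = ([random.gauss(0.0, 1.0) for _ in range(80)]
        + [random.gauss(5.0, 1.0) for _ in range(80)])
pi, mu1, sd1, mu2, sd2 = em_two_gaussians(data)
```

With well-separated components the estimates land near the generating parameters; the study's harder conditions (small mean separation, unequal variances, non-normality) are exactly where EM and the distance-based methods diverge.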
Aging Education: A Worldwide Imperative
ERIC Educational Resources Information Center
McGuire, Sandra L.
2017-01-01
Life expectancy is increasing worldwide. Unfortunately, people are generally not prepared for this long life ahead and have ageist attitudes that inhibit maximizing the "longevity dividend" they have been given. Aging education can prepare people for life's later years and combat ageism. It can reimage aging as a time of continued…
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
India's growing participation in global clinical trials.
Gupta, Yogendra K; Padhy, Biswa M
2011-06-01
Lower operational costs, recent regulatory reforms, and several logistic advantages make India an attractive destination for conducting clinical trials. Efforts to maintain stringent ethical standards and the launch of the Pharmacovigilance Program of India are expected to maximize the country's potential for clinical research. Copyright © 2011. Published by Elsevier Ltd.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
2005-04-01
The critical incident interview uses recollection of a specific incident as its starting point and employs a semistructured interview format... context assessment, expectancies, and judgments. The four sweeps in the critical incident interview include: Sweep 1 - Prompting the interviewee to...
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
ERIC Educational Resources Information Center
Hess, Frederick M.; McShane, Michael Q.
2013-01-01
There are at least four key places where the Common Core intersects with current efforts to improve education in the United States--testing, professional development, expectations, and accountability. Understanding them can help educators, parents, and policymakers maximize the chance that the Common Core is helpful to these efforts and, perhaps…
Designing Contributing Student Pedagogies to Promote Students' Intrinsic Motivation to Learn
ERIC Educational Resources Information Center
Herman, Geoffrey L.
2012-01-01
In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of…
Charter School Discipline: Examples of Policies and School Climate Efforts from the Field
ERIC Educational Resources Information Center
Kern, Nora; Kim, Suzie
2016-01-01
Students need a safe and supportive school environment to maximize their academic and social-emotional learning potential. A school's discipline policies and practices directly impact school climate and student achievement. Together, discipline policies and positive school climate efforts can reinforce behavioral expectations and ensure student…
Llewellyn-Thomas, H; Thiel, E; Paterson, M; Naylor, D
1999-04-01
To elicit patients' maximal acceptable waiting times (MAWT) for non-urgent coronary artery bypass grafting (CABG), and to determine if MAWT is related to prior expectations of waiting times, symptom burden, expected relief, or perceived risks of myocardial infarction while waiting. Seventy-two patients on an elective CABG waiting list chose between two hypothetical but plausible options: a 1-month wait with 2% risk of surgical mortality, and a 6-month wait with 1% risk of surgical mortality. Waiting time in the 6-month option was varied up if respondents chose the 6-month/lower risk option, and down if they chose the 1-month/higher risk option, until the MAWT switch point was reached. Patients also reported their expected waiting time, perceived risks of myocardial infarction while waiting, current function, expected functional improvement and the value of that improvement. Only 17 (24%) patients chose the 6-month/1% risk option, while 55 (76%) chose the 1-month/2% risk option. The median MAWT was 2 months; scores ranged from 1 to 12 months (with two outliers). Many perceived high cumulative risks of myocardial infarction if waiting for 1 (upper quartile, > or = 1.45%) or 6 (upper quartile, > or = 10%) months. However, MAWT scores were related only to expected waiting time (r = 0.47; P < 0.0001). Most patients reject waiting 6 months for elective CABG, even if offered along with a halving in surgical mortality (from 2% to 1%). Intolerance for further delay seems to be determined primarily by patients' attachment to their scheduled surgical dates. Many also have severely inflated perceptions of their risk of myocardial infarction in the queue. These results suggest a need for interventions to modify patients' inaccurate risk perceptions, particularly if a scheduled surgical date must be deferred.
Impact of chronobiology on neuropathic pain treatment.
Gilron, Ian
2016-01-01
Inflammatory pain exhibits circadian rhythmicity. Recently, a distinct diurnal pattern has been described for peripheral neuropathic conditions. This diurnal variation has several implications: advancing understanding of chronobiology may facilitate identification of new and improved treatments; developing pain-contingent strategies that maximize treatment at times of the day associated with highest pain intensity may provide optimal pain relief as well as minimize treatment-related adverse effects (e.g., daytime cognitive dysfunction); and consideration of the impact of chronobiology on pain measurement may lead to improvements in analgesic study design that will maximize assay sensitivity of clinical trials. Recent and ongoing chronobiology studies are thus expected to advance knowledge and treatment of neuropathic pain.
Interleaved Observation Execution and Rescheduling on Earth Observing Systems
NASA Technical Reports Server (NTRS)
Khatib, Lina; Frank, Jeremy; Smith, David; Morris, Robert; Dungan, Jennifer
2003-01-01
Observation scheduling for Earth orbiting satellites solves the following problem: given a set of requests for images of the Earth, a set of instruments for acquiring those images distributed on a collection of orbiting satellites, and a set of temporal and resource constraints, generate a set of assignments of instruments and viewing times to those requests that satisfies those constraints. Observation scheduling is often construed as a constrained optimization problem with the objective of maximizing the overall utility of the science data acquired. The utility of an image is typically based on the intrinsic importance of acquiring it (for example, its importance in meeting a mission or science campaign objective) as well as the expected value of the data given current viewing conditions (for example, if the image is occluded by clouds, its value is usually diminished). Currently, science observation scheduling for Earth Observing Systems is done on the ground, for periods covering a day or more. Schedules are uplinked to the satellites and are executed rigorously. An alternative to this scenario is to do some of the decision-making about what images are to be acquired on board. The principal argument for this capability is that the desirability of making an observation can change dynamically because of changes in meteorological conditions (e.g., cloud cover), unforeseen events such as fires, floods, or volcanic eruptions, or unexpected changes in satellite or ground station capability. Furthermore, since satellites can only communicate with the ground between 5% and 10% of the time, it may be infeasible to make the desired changes to the schedule on the ground and uplink the revisions in time for the on-board system to execute them. Examples of scenarios that motivate an on-board capability for revising schedules include the following. First, if a desired visual scene is completely obscured by clouds, then there is little point in taking it.
In this case, satellite resources such as power and storage space can be better utilized taking another, higher-quality image. Second, if an unexpected but important event occurs (such as a fire, flood, or volcanic eruption), there may be good reason to take images of it instead of expending satellite resources on some of the lower priority scheduled observations. Finally, if there is an unexpected loss of capability, it may be impossible to carry out the schedule of planned observations. For example, if a ground station goes down temporarily, a satellite may not be able to free up enough storage space to continue with the remaining schedule of observations. This paper describes an approach for interleaving execution of observation schedules with dynamic schedule revision based on changes to the expected utility of the acquired images. We describe the problem in detail, formulate an algorithm for interleaving schedule revision and execution, and discuss refinements to the algorithm based on the need for search efficiency. We summarize with a brief discussion of the tests performed on the system.
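The on-board revision idea can be illustrated with a greedy sketch. The data structures, field names, and utility model below are hypothetical simplifications, not the paper's algorithm: expected utility discounts an observation's intrinsic priority by its current cloud cover, and the revised schedule is refilled greedily under an on-board storage budget.

```python
def expected_utility(obs):
    # Expected value = intrinsic science priority scaled by the
    # probability the scene is actually visible (1 - cloud cover).
    return obs["priority"] * (1.0 - obs["cloud_cover"])

def greedy_schedule(candidates, storage_budget):
    """Greedy revision: repeatedly take the feasible observation with
    the highest expected utility until storage is exhausted."""
    chosen, remaining = [], storage_budget
    for obs in sorted(candidates, key=expected_utility, reverse=True):
        if obs["storage"] <= remaining and expected_utility(obs) > 0:
            chosen.append(obs["name"])
            remaining -= obs["storage"]
    return chosen

# Hypothetical candidate observations after a weather update.
candidates = [
    {"name": "forest_fire", "priority": 9.0, "cloud_cover": 0.1, "storage": 2},
    {"name": "crop_survey", "priority": 5.0, "cloud_cover": 0.0, "storage": 2},
    {"name": "coastline",   "priority": 8.0, "cloud_cover": 0.9, "storage": 2},
]
plan = greedy_schedule(candidates, storage_budget=4)
```

The mostly obscured coastline scene drops out of the revised plan even though its intrinsic priority is high, which is the cloud-cover scenario motivating on-board rescheduling.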
Risk-dependent reward value signal in human prefrontal cortex
Tobler, Philippe N.; Christopoulos, George I.; O'Doherty, John P.; Dolan, Raymond J.; Schultz, Wolfram
2009-01-01
When making choices under uncertainty, people usually consider both the expected value and risk of each option, and choose the one with the higher utility. Expected value increases the expected utility of an option for all individuals. Risk increases the utility of an option for risk-seeking individuals, but decreases it for risk averse individuals. In 2 separate experiments, one involving imperative (no-choice), the other choice situations, we investigated how predicted risk and expected value aggregate into a common reward signal in the human brain. Blood oxygen level dependent responses in lateral regions of the prefrontal cortex increased monotonically with increasing reward value in the absence of risk in both experiments. Risk enhanced these responses in risk-seeking participants, but reduced them in risk-averse participants. The aggregate value and risk responses in lateral prefrontal cortex contrasted with pure value signals independent of risk in the striatum. These results demonstrate an aggregate risk and value signal in the prefrontal cortex that would be compatible with basic assumptions underlying the mean-variance approach to utility. PMID:19369207
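The mean-variance account described above can be written directly: utility is expected value plus a risk term (here the standard deviation) whose weight is positive for risk-seeking and negative for risk-averse individuals. A minimal sketch with illustrative parameter values:

```python
import math

def mean_variance_utility(outcomes, probs, risk_weight):
    """Utility = expected value + risk_weight * SD. A positive weight
    models a risk seeker, a negative weight a risk-averse chooser."""
    ev = sum(p * x for p, x in zip(probs, outcomes))
    sd = math.sqrt(sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes)))
    return ev + risk_weight * sd

risky_outcomes, risky_probs = [0.0, 100.0], [0.5, 0.5]  # EV 50, SD 50
sure_outcomes, sure_probs = [50.0], [1.0]               # EV 50, SD 0
# Same expected value, opposite effect of risk on utility:
seeker = mean_variance_utility(risky_outcomes, risky_probs, risk_weight=0.2)
averse = mean_variance_utility(risky_outcomes, risky_probs, risk_weight=-0.2)
```

For equal expected value, the risk-seeker's utility for the gamble rises above the sure thing while the risk-averse chooser's falls below it, mirroring the enhancement versus reduction of the prefrontal value signal reported above.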
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two-Earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1 m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François
2013-01-01
Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.
Using return on investment to maximize conservation effectiveness in Argentine grasslands
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-01-01
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land (“minimize cost”), maximizing conservation benefit regardless of cost (“maximize benefit”), and maximizing conservation benefit per dollar (“return on investment”). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy. PMID:21098281
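The three strategies compared above differ only in their ordering rule. The return-on-investment rule can be sketched as a greedy benefit-per-dollar ranking (the parcel data are hypothetical, and the paper's benefit function also incorporates conversion risk, spatial clumping, and diminishing returns, all omitted here):

```python
def allocate(parcels, budget):
    """Return-on-investment rule: fund parcels in descending order of
    conservation benefit per dollar until the budget is exhausted."""
    funded, spent = [], 0.0
    for p in sorted(parcels, key=lambda p: p["benefit"] / p["cost"], reverse=True):
        if spent + p["cost"] <= budget:
            funded.append(p["name"])
            spent += p["cost"]
    return funded

# Hypothetical parcels: benefit in arbitrary units, cost in dollars.
parcels = [
    {"name": "A", "benefit": 10.0, "cost": 100.0},  # highest benefit, ratio 0.10
    {"name": "B", "benefit": 9.0, "cost": 30.0},    # ratio 0.30
    {"name": "C", "benefit": 2.0, "cost": 5.0},     # ratio 0.40
]
funded = allocate(parcels, budget=100.0)
```

With this $100 budget, a maximize-benefit rule would spend everything on parcel A for a benefit of 10, while the ROI ordering funds C and B for a combined benefit of 11, illustrating why return on investment dominates under realistic budgets.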
Optimization of detectors for the ILC
NASA Astrophysics Data System (ADS)
Suehara, Taikan; ILD Group; SID Group
2016-04-01
The International Linear Collider (ILC) is a next-generation e+e- linear collider to explore the Higgs boson, Beyond-Standard-Model physics, and top and electroweak particles with great precision. We are optimizing our two detectors, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. Optimization studies on the vertex detectors, main trackers, and calorimeters are underway. We aim to conclude the optimization and establish final designs in a few years, finishing the detector TDR and proposal in response to the expected "green sign" for the ILC project.
Is expected utility theory normative for medical decision making?
Cohen, B J
1996-01-01
Expected utility theory is felt by its proponents to be a normative theory of decision making under uncertainty. The theory starts with some simple axioms that are held to be rules that any rational person would follow. It can be shown that if one adheres to these axioms, a numerical quantity, generally referred to as utility, can be assigned to each possible outcome, with the preferred course of action being that which has the highest expected utility. One of these axioms, the independence principle, is controversial, and is frequently violated in experimental situations. Proponents of the theory hold that these violations are irrational. The independence principle is simply an axiom dictating consistency among preferences, in that it dictates that a rational agent should hold a specified preference given another stated preference. When applied to preferences between lotteries, the independence principle can be demonstrated to be a rule that is followed only when preferences are formed in a particular way. The logic of expected utility theory is that this demonstration proves that preferences should be formed in this way. An alternative interpretation is that this demonstrates that the independence principle is not a valid general rule of consistency, but in particular, is a rule that must be followed if one is to consistently apply the decision rule "choose the lottery that has the highest expected utility." This decision rule must be justified on its own terms as a valid rule of rationality by demonstration that violation would lead to decisions that conflict with the decision maker's goals. This rule does not appear to be suitable for medical decisions because often these are one-time decisions in which expectation, a long-run property of a random variable, would not seem to be applicable. This is particularly true for those decisions involving a non-trivial risk of death.
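The decision rule at issue, "choose the lottery that has the highest expected utility," can be stated in a few lines. The medical options and utility functions below are illustrative, not from the article; they show how the same pair of lotteries can yield different choices under linear versus concave (risk-averse) utility:

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * utility(x) for p, x in lottery)

def best_option(options, utility):
    # The decision rule under examination: maximize expected utility.
    return max(options, key=lambda name: expected_utility(options[name], utility))

# Hypothetical one-time medical decision with outcomes on a 0-10 scale.
options = {
    "surgery": [(0.8, 10.0), (0.2, 0.0)],  # risky: better outcome, may fail
    "medication": [(1.0, 7.0)],            # certain, moderate outcome
}
risk_neutral_choice = best_option(options, utility=lambda x: x)  # EU 8 vs 7
risk_averse_choice = best_option(options, utility=math.sqrt)
```

Under linear utility the risky surgery wins (8 vs 7), while concave utility flips the choice to the sure option; the article's point is that for one-time decisions the long-run expectation underlying this rule may not be the right justification at all.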
Increased cardiac output elicits higher V̇O2max in response to self-paced exercise.
Astorino, Todd Anthony; McMillan, David William; Edmunds, Ross Montgomery; Sanchez, Eduardo
2015-03-01
Recently, a self-paced protocol demonstrated higher maximal oxygen uptake versus the traditional ramp protocol. The primary aim of the current study was to further explore potential differences in maximal oxygen uptake between the ramp and self-paced protocols using simultaneous measurement of cardiac output. Active men and women of various fitness levels (N = 30, mean age = 26.0 ± 5.0 years) completed 3 graded exercise tests separated by a minimum of 48 h. Participants initially completed progressive ramp exercise to exhaustion to determine maximal oxygen uptake followed by a verification test to confirm maximal oxygen uptake attainment. Over the next 2 sessions, they performed a self-paced and an additional ramp protocol. During exercise, gas exchange data were obtained using indirect calorimetry, and thoracic impedance was utilized to estimate hemodynamic function (stroke volume and cardiac output). One-way ANOVA with repeated measures was used to determine differences in maximal oxygen uptake and cardiac output between ramp and self-paced testing. Results demonstrated lower (p < 0.001) maximal oxygen uptake via the ramp (47.2 ± 10.2 mL·kg⁻¹·min⁻¹) versus the self-paced (50.2 ± 9.6 mL·kg⁻¹·min⁻¹) protocol, with no interaction (p = 0.06) seen for fitness level. Maximal heart rate and cardiac output (p = 0.02) were higher in the self-paced protocol versus ramp exercise. In conclusion, data show that the traditional ramp protocol may underestimate maximal oxygen uptake compared with a newly developed self-paced protocol, with a greater cardiac output potentially responsible for this outcome.
Renal Perfusion in Scleroderma Patients Assessed by Microbubble-Based Contrast-Enhanced Ultrasound
Kleinert, Stefan; Roll, Petra; Baumgaertner, Christian; Himsel, Andrea; Mueller, Adelheid; Fleck, Martin; Feuchtenberger, Martin; Jenett, Manfred; Tony, Hans-Peter
2012-01-01
Objectives: Renal damage is common in scleroderma. It can occur acutely or chronically. Renal reserve might already be impaired before it can be detected by laboratory findings. Microbubble-based contrast-enhanced ultrasound has been demonstrated to improve blood perfusion imaging in organs. Therefore, we conducted a study to assess renal perfusion in scleroderma patients utilizing this novel technique. Materials and Methodology: A microbubble-based contrast agent was infused and destroyed using a high mechanical index with a Siemens Sequoia scanner (curved array, 4.5 MHz). Replenishment was recorded for 8 seconds. Regions of interest (ROIs) were analyzed in the renal parenchyma, an interlobular artery, and a renal pyramid with quantitative contrast software (CUSQ 1.4, Siemens Acuson, Mountain View, California). Time to maximal enhancement (TmE), maximal enhancement (mE), and maximal enhancement relative to the maximal enhancement of the interlobular artery (mE%A) were calculated for the different ROIs. Results: There was a linear correlation between the time to maximal enhancement in the parenchyma and the glomerular filtration rate. However, the other parameters did not reveal significant differences between scleroderma patients and healthy controls. Conclusion: Renal perfusion of scleroderma patients, including the glomerular filtration rate, can be assessed using microbubble-based contrast media. PMID:22670165
Dolman, M; Chase, J
1996-08-01
A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravida females. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises, and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly.
NASA Astrophysics Data System (ADS)
Trinh, H. P.
2012-06-01
Utilizing new cold hypergolic propellants and leveraging Missile Defense Agency technology for propulsion systems on Mars exploration missions will increase science payload and have significant payoffs and benefits for NASA missions.
Genetic variation in the USDA Chamaecrista fasciculata collection
USDA-ARS?s Scientific Manuscript database
Germplasm collections serve as critical repositories of genetic variation. Characterizing genetic diversity in existing collections is necessary to maximize their utility and to guide future collecting efforts. We have used AFLP markers to characterize genetic variation in the USDA germplasm collect...
García, J B; Tormo, José R
2003-06-01
A new tool, HPLC Studio, was developed for the comparison of high-performance liquid chromatography (HPLC) chromatograms from microbial extracts. The new utility makes it possible to create a virtual chromatogram by mixing up to 20 individual chromatograms. The virtual chromatogram is the first step in establishing a ranking of the microbial fermentation conditions based on either the area or diversity of HPLC peaks. The utility was used to maximize the diversity of secondary metabolites tested from a microorganism and therefore increase the chances of finding new lead compounds in a drug discovery program.
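The virtual-chromatogram idea can be sketched as follows. The function names and the pointwise-maximum combination rule are assumptions for illustration; the tool's actual mixing and peak-diversity metrics are not specified in the abstract:

```python
def virtual_chromatogram(traces):
    """Combine individual HPLC traces (sampled on a common time grid)
    into a virtual chromatogram by taking the pointwise maximum."""
    return [max(vals) for vals in zip(*traces)]

def rank_by_area(traces):
    """Rank fermentation conditions by total trace area (descending)."""
    areas = [sum(t) for t in traces]
    return sorted(range(len(traces)), key=lambda i: areas[i], reverse=True)

# Hypothetical traces from three fermentation conditions of one microorganism.
traces = [
    [0, 5, 0, 0],  # condition 0: one large peak
    [0, 0, 3, 0],  # condition 1: one medium peak
    [1, 1, 1, 1],  # condition 2: broad low signal
]
virtual = virtual_chromatogram(traces)
ranking = rank_by_area(traces)
```

The virtual trace keeps the best response at each retention time across conditions, and the ranking supports choosing which fermentation conditions to carry forward into screening.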
Method and system for controlling a gasification or partial oxidation process
Rozelle, Peter L; Der, Victor K
2015-02-10
A method and system for controlling a fuel gasification system includes optimizing a conversion of solid components in the fuel to gaseous fuel components, controlling the flux of solids entrained in the product gas through equipment downstream of the gasifier, and maximizing the overall efficiencies of processes utilizing gasification. A combination of models, when utilized together, can be integrated with existing plant control systems and operating procedures and employed to develop new control systems and operating procedures. Such an approach is further applicable to gasification systems that utilize both dry feed and slurry feed.
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects’ criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
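The contrast between the objectively optimal criterion and a subjectively weighted one can be sketched for equal-variance Gaussian detection, a simplified stand-in for the paper's two-location task; d′, the prior, unit rewards, and the Tversky-Kahneman weighting form with γ = 0.6 are all illustrative. With unequal priors, weighting the probabilities pulls the "optimal" criterion back toward neutral, which is the conservatism described above:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weight(p, gamma=0.6):
    # Tversky-Kahneman probability weighting: gamma < 1 overweights
    # small probabilities and underweights large ones.
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def gain(c, p_target, d_prime=1.0, w=lambda p: p):
    """(Subjectively weighted) expected gain at criterion c, with unit
    rewards for hits and correct rejections and zero for errors."""
    p_hit = 1.0 - Phi(c - d_prime)  # P(respond "target" | target)
    p_cr = Phi(c)                   # P(respond "no target" | no target)
    return w(p_target) * p_hit + w(1.0 - p_target) * p_cr

# Grid search over criteria for an 80% target-prior condition.
cs = [i / 100.0 for i in range(-300, 301)]
c_opt = max(cs, key=lambda c: gain(c, 0.8))
c_weighted = max(cs, key=lambda c: gain(c, 0.8, w=weight))
```

Here the objective optimum is strongly liberal (near -0.89), while the subjectively weighted optimum sits much closer to the neutral criterion (near -0.33): distorted probabilities alone produce criterion placements that look conservative.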
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps, and its visualization, is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating the knowledge extracted from the sensors' raw data with the available statutory records. Statutory records were combined with the hypotheses from the sensors to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Martian resource locations: Identification and optimization
NASA Astrophysics Data System (ADS)
Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam
2005-04-01
The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
Endogenous patient responses and the consistency principle in cost-effectiveness analysis.
Liu, Liqun; Rettenmaier, Andrew J; Saving, Thomas R
2012-01-01
In addition to incurring direct treatment costs and generating direct health benefits that improve longevity and/or health-related quality of life, medical interventions often have further or "unrelated" financial and health impacts, raising the issue of what costs and effects should be included in calculating the cost-effectiveness ratio of an intervention. The "consistency principle" in medical cost-effectiveness analysis (CEA) requires that one include both the cost and the utility benefit of a change (in medical expenditures, consumption, or leisure) caused by an intervention or neither of them. By distinguishing between exogenous changes directly brought about by an intervention and endogenous patient responses to the exogenous changes, and within a lifetime utility maximization framework, this article addresses 2 questions related to the consistency principle: 1) how to choose among alternative internally consistent exclusion/inclusion rules, and 2) what to do with survival consumption costs and earnings. It finds that, for an endogenous change, excluding or including both the cost and the utility benefit of the change does not alter cost-effectiveness results. Further, in agreement with the consistency principle, welfare maximization implies that consumption costs and earnings during the extended life directly caused by an intervention should be included in CEA.
The utility of hand transplantation in hand amputee patients.
Alolabi, Noor; Chuback, Jennifer; Grad, Sharon; Thoma, Achilles
2015-01-01
To measure the desirable health outcome, termed utility, and the expected quality-adjusted life years (QALYs) gained with hand composite tissue allotransplantation (CTA) using hand amputee patients and the general public. Using the standard gamble (SG) and time trade-off (TTO) techniques, utilities were obtained from 30 general public participants and 12 amputee patients. The health utility and net QALYs gained or lost with transplantation were computed. A sensitivity analysis was conducted to account for the effects of lifelong immunosuppression on the life expectancy of transplant recipients. Higher scores represent greater utility. Hand amputation mean health utility as measured by the SG and TTO methods, respectively, was 0.72 and 0.80 for the general public and 0.69 and 0.70 for hand amputees. In comparison, hand CTA mean health utility was 0.74 and 0.82 for the general public and 0.83 and 0.86 for amputees. Hand CTA imparted an expected gain of 0.9 QALYs (SG and TTO) in the general public and 7.0 (TTO) and 7.8 (SG) QALYs in hand amputees. A loss of at least 1.7 QALYs was demonstrated when decreasing the life expectancy in the sensitivity analysis in the hand amputee group. Hand amputee patients did not show a preference toward hand CTA with its inherent risks. With this procedure being increasingly adopted worldwide, the benefits must be carefully weighed against the risks of lifelong immunosuppressive therapy. This study does not show clear benefit to advocate hand CTA. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
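The QALY arithmetic behind figures like these is simple to reproduce: QALYs gained are the utility difference multiplied by remaining life-years. The sketch below uses the amputee TTO utilities reported above and a hypothetical remaining life expectancy (the paper does not state the figure assumed here) to recover the reported order of magnitude.

```python
def qaly_gain(u_post, u_pre, years):
    """QALYs gained = (post-intervention utility - pre-intervention utility) * remaining life-years."""
    return (u_post - u_pre) * years

# Amputee TTO utilities from the abstract: 0.86 with hand CTA vs 0.70 with
# amputation. A remaining life expectancy of 44 years (hypothetical, chosen
# for illustration) lands near the reported 7.0-QALY gain.
gain = qaly_gain(0.86, 0.70, 44)
print(round(gain, 1))  # 7.0
```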
Wunderlich, Adam; Abbey, Craig K
2013-11-01
Studies of lesion detectability are often carried out to evaluate medical imaging technology. For such studies, several approaches have been proposed to measure observer performance, such as the receiver operating characteristic (ROC), the localization ROC (LROC), the free-response ROC (FROC), the alternative free-response ROC (AFROC), and the exponentially transformed FROC (EFROC) paradigms. Therefore, an experimenter seeking to carry out such a study is confronted with an array of choices. Traditionally, arguments for different approaches have been made on the basis of practical considerations (statistical power, etc.) or the gross level of analysis (case-level or lesion-level). This article contends that a careful consideration of utility should form the rationale for matching the assessment paradigm to the clinical task of interest. In utility theory, task performance is commonly evaluated with total expected utility, which integrates the various event utilities against the probability of each event. To formalize the relationship between expected utility and the summary curve associated with each assessment paradigm, the concept of a "natural" utility structure is proposed. A natural utility structure is defined for a summary curve when the variables associated with the summary curve axes are sufficient for computing total expected utility, assuming that the disease prevalence is known. Natural utility structures for ROC, LROC, FROC, AFROC, and EFROC curves are introduced, clarifying how the utilities of correct and incorrect decisions are aggregated by summary curves. Further, conditions are given under which general utility structures for localization-based methodologies reduce to case-based assessment. Overall, the findings reveal how summary curves correspond to natural utility structures of diagnostic tasks, suggesting utility as a motivating principle for choosing an assessment paradigm.
Patil, V M; Chakraborty, S; Jithin, T K; Dessai, S; Sajith Babu, T P; Raghavan, V; Geetha, M; Kumar, T Shiva; Biji, M S; Bhattacharjee, A; Nair, C
2016-01-01
The objective was to design and validate a questionnaire for capturing palliative chemotherapy-related preferences and expectations. Single-arm, unicentric, prospective observational study. The EXPECT questionnaire was designed to capture the preferences and expectations of patients undergoing palliative chemotherapy. The questionnaire underwent linguistic validation and was then tested in patients. Ten patients undergoing chemotherapy for solid tumors who fulfilled the inclusion and exclusion criteria self-administered the EXPECT questionnaire in their regional language. After filling in this questionnaire, they self-administered the quick questionnaire-10 (QQ-10). SPSS version 16 (IBM, New York) was used for analysis. The completion rate of the EXPECT questionnaire was calculated, and its feasibility, face validity, utility and time taken for completion were also assessed. The completion rate was 100%, and all patients completed the questionnaire within 5 min. The QQ-10 tool confirmed the feasibility, face validity and utility of the questionnaire. The EXPECT questionnaire was validated in the regional language and is an effective tool for capturing patients' preferences and expectations from chemotherapy.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Tian, Rui; Yu, Xiaosong; Zhang, Jiawei; Zhang, Jie
2017-03-01
A proper traffic grooming strategy in dynamic optical networks can improve the utilization of bandwidth resources. An auxiliary graph (AG) is designed to solve the traffic grooming problem under a dynamic traffic scenario in spatial-division-multiplexing-enabled elastic optical networks (SDM-EON) with multi-core fibers. Five traffic grooming policies, achieved by adjusting the edge weights of the AG, are proposed and evaluated through simulation: maximal electrical grooming (MEG), maximal optical grooming (MOG), maximal SDM grooming (MSG), minimize virtual hops (MVH), and minimize physical hops (MPH). Numerical results show that each traffic grooming policy has its own strengths: the MPH policy achieves the lowest bandwidth blocking ratio, MEG saves the most transponders, and MSG uses the fewest cores for each request.
1985-11-01
arranged to maximize thermal output; - Plant will meet PURPA criteria for recognition as a "Qualifying Facility" (QF). - GFC emissions will be...10. Plant must meet Public Utilities Regulatory Policies Act (PURPA) criteria for classification as a "Qualifying Facility" (QF). 11. Visual effect...assessments. The Public Utilities Regulatory Policies Act (PURPA), which is administered by the Federal Energy Regulatory Commission (FERC), governs how a
Global Snow from Space: Development of a Satellite-based, Terrestrial Snow Mission Planning Tool
NASA Astrophysics Data System (ADS)
Forman, B. A.; Kumar, S.; LeMoigne, J.; Nag, S.
2017-12-01
A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary - or perhaps contradictory - information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?
Towards the Development of a Global, Satellite-based, Terrestrial Snow Mission Planning Tool
NASA Technical Reports Server (NTRS)
Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja
2017-01-01
A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical orbital configuration. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary, or perhaps contradictory, information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: 1. What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? 2. How might observations be coordinated (in space and time) to maximize this utility? 3. What is the additional utility associated with an additional observation? 4. How can future mission costs be minimized while ensuring science requirements are fulfilled?
Vellakkal, Sukumar
2013-01-01
Health insurers in the Indian healthcare market administer retrospectively defined package rates for various inpatient procedures as a provider payment mechanism for empanelled hospitals. This study analyzed the impact of private health insurance on healthcare utilization, in terms of both length of hospitalization and per-day hospitalization expenditure, in this market. The claim records of 94,443 insured individuals and the hospitalisation data of 32,665 uninsured individuals were used. By applying stepwise and propensity score matching methods, the sample of uninsured individuals was matched with the insured, and the 'average treatment effect on the treated' (ATT) was estimated. Overall, the strategies of hospitals, the insured, and insurers for maximizing their utility competed with one another. However, two aligning co-operative strategies between insurers and hospitals were significant, with hospitals playing the dominant role. Hospitals maximize their utility by providing high-cost healthcare on par with the pre-defined package rates, but align with the interest of insurers by reducing the number (length) of hospitalisation days. The empirical results show that private health insurance coverage leads to (i) a reduction in length of hospitalization and (ii) an increase in per-day hospital (health) expenditure. It is necessary to regulate and develop a competent healthcare market in the country, with a proper monitoring mechanism for healthcare utilization and benchmarks for the pricing and provision of healthcare services.
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager should choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved: the subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two, i.e., the cost of the subsystem plus the expected cost due to subsystem failure, should be minimized.
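A minimal numeric sketch of this trade-off, with hypothetical dollar figures (in $M) chosen only for illustration: the decision rule is to minimize subsystem cost plus failure probability times the cost incurred on failure.

```python
def total_cost(subsystem_cost, reliability, failure_cost):
    """Total expected cost = subsystem cost + (1 - reliability) * cost incurred on failure."""
    return subsystem_cost + (1.0 - reliability) * failure_cost

# Hypothetical figures: a $1.0M subsystem at .990 reliability vs a $1.4M
# subsystem at .995, with a $100M expected loss if the subsystem fails.
cheap = total_cost(1.0, 0.990, 100.0)   # 1.0 + 0.010 * 100 = 2.0
costly = total_cost(1.4, 0.995, 100.0)  # 1.4 + 0.005 * 100 = 1.9
print(round(cheap, 2), round(costly, 2))  # 2.0 1.9
```

Here the more reliable subsystem wins despite its higher price; with a smaller failure cost the comparison can flip, which is the point of considering the two costs jointly.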
Effective Fund-Raising for Non-profit Camps.
ERIC Educational Resources Information Center
Larson, Paula
1998-01-01
Identifies and describes strategies for effective fundraising: imagining the possibilities, identifying fund-raising sources, targeting fund-raising efforts, maximizing time by utilizing public relations efforts and involving staff, writing quality proposals and requests, and staying educated on fund-raising topics. Sidebars describe planned…
RFC: EPA's Action Plan for Bisphenol A Pursuant to EPA's Data Quality Guidelines
The American Chemistry Council (ACC) submits this Request for Correction to the U.S. Environmental Protection Agency under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency
Maximizing Educational Opportunity through Community Resources.
ERIC Educational Resources Information Center
Maradian, Steve
In the face of increased demands and diminishing resources, educational administrators at correctional facilities should look beyond institutional resources and utilize the services of area community colleges. The community college has an established track record in correctional education. Besides the nationally recognized correctional programs…
Applying Intermediate Microeconomics to Terrorism
ERIC Educational Resources Information Center
Anderton, Charles H.; Carter, John R.
2006-01-01
The authors show how microeconomic concepts and principles are applicable to the study of terrorism. The utility maximization model provides insights into both terrorist resource allocation choices and government counterterrorism efforts, and basic game theory helps characterize the strategic interdependencies among terrorists and governments.…
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We propose a novel approach to the automatic classification of Diabetic Retinopathy (DR) images and the retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes that maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
Trifiletti, Daniel M; Hill, Colin; Cohen-Inbar, Or; Xu, Zhiyuan; Sheehan, Jason P
2017-09-01
While stereotactic radiosurgery (SRS) has been shown effective in the management of brain metastases, small brain metastases (≤10 mm) can pose unique challenges. Our aim was to investigate the efficacy of SRS in the treatment of small brain metastases, as well as elucidate clinically relevant factors impacting local failure (LF). We utilized a large, single-institution cohort to perform a retrospective analysis of patients with brain metastases up to 1 cm in maximal dimension. Clinical and radiosurgical parameters were investigated for an association with LF and compared using a competing risk model to calculate cumulative incidence functions, with death and whole brain radiotherapy serving as competing risks. 1596 small brain metastases treated with SRS among 424 patients were included. Among these tumors, 33 developed LF during the follow-up period (2.4% at 12 months following SRS). Competing risk analysis demonstrated that LF was dependent on tumor size (0.7% if ≤2 mm and 3.0% if 2-10 mm at 12 months, p = 0.016). Other factors associated with increasing risk of LF were the decreasing margin dose, increasing maximal tumor diameter, volume, and radioresistant tumors (each p < 0.01). 22 tumors (0.78%) developed radiographic radiation necrosis following SRS, and this incidence did not differ by tumor size (≤2 mm and 2-10 mm, p = 0.200). This large analysis confirms that SRS remains an effective modality in treatment of small brain metastases. In light of the excellent local control and relatively low risk of toxicity, patients with small brain metastases who otherwise have a reasonable expected survival should be considered for radiosurgical management.
Learning and exploration in action-perception loops.
Little, Daniel Y; Sommer, Friedrich T
2013-01-01
Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a primary motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational studies of behavior have largely focused on the control problem of maximizing acquisition of rewards and have treated learning the structure of data as a secondary objective. Here, we study exploration in the absence of external reward feedback. Instead, we take the quality of an agent's learned internal model to be the primary objective. In a simple probabilistic framework, we derive a Bayesian estimate for the amount of information about the environment an agent can expect to receive by taking an action, a measure we term the predicted information gain (PIG). We develop exploration strategies that approximately maximize PIG. One strategy based on value-iteration consistently learns faster than previously developed reward-free exploration strategies across a diverse range of environments. Psychologists believe the evolutionary advantage of learning-driven exploration lies in the generalized utility of an accurate internal model. Consistent with this hypothesis, we demonstrate that agents which learn more efficiently during exploration are later better able to accomplish a range of goal-directed tasks. We will conclude by discussing how our work elucidates the explorative behaviors of animals and humans, its relationship to other computational models of behavior, and its potential application to experimental design, such as in closed-loop neurophysiology studies.
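A simplified sketch of the predicted-information-gain idea for a single action with a Dirichlet belief over its outcomes: average, over outcomes predicted by the current belief, the KL divergence from the updated belief to the current one. This plug-in version uses Dirichlet mean estimates and is a toy illustration of the concept, not the paper's exact derivation.

```python
from math import log

def dirichlet_mean(counts):
    """Mean of a Dirichlet distribution with the given pseudo-counts."""
    total = sum(counts)
    return [c / total for c in counts]

def kl(p, q):
    """KL divergence D(p || q) between two categorical distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def predicted_information_gain(counts):
    """Simplified PIG for one action with Dirichlet pseudo-counts over outcomes.

    For each possible outcome o, add one pseudo-count, and weight the KL
    divergence from the updated mean estimate to the current one by the
    current predictive probability of o.
    """
    theta = dirichlet_mean(counts)
    pig = 0.0
    for o, p_o in enumerate(theta):
        updated = counts[:]
        updated[o] += 1
        pig += p_o * kl(dirichlet_mean(updated), theta)
    return pig

# An action whose outcome distribution is still uncertain (few pseudo-counts)
# promises more information than a well-learned one.
novel = predicted_information_gain([1, 1, 1])
familiar = predicted_information_gain([100, 100, 100])
print(novel > familiar)  # True
```

Exploration strategies of the kind the abstract describes would then prefer actions with high PIG, the gain shrinking toward zero as each action's outcome statistics are learned.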
Simons, M; Kee, E Gee; Kimble, R; Tyack, Z
2017-08-01
The aim of this study was to investigate the reproducibility and validity of measuring scar height in children using ultrasound and a 3D camera. Using a cross-sectional design, children with discrete burn scars were included. Reproducibility was tested using the Intraclass Correlation Coefficient (ICC) for reliability, and percentage agreement within 1 mm between test and re-test, standard error of measurement (SEM), smallest detectable change (SDC) and Bland-Altman limits of agreement for agreement. Concurrent validity was tested using Spearman's rho for support of pre-specified hypotheses. Forty-nine participants (55 scars) were included. For ultrasound, test-retest and inter-rater reproducibility of scar thickness were acceptable for scarred skin (ICC=0.95, SDC=0.06 cm and ICC=0.82, SDC=0.14 cm), and the ultrasound picked up changes of <1 mm. Inter-rater reproducibility of maximal scar height using the 3D camera was acceptable (ICC=0.73, SDC=0.55 cm). Construct validity of the ultrasound was supported by a strong correlation between the measure of scar thickness and observer ratings of thickness using the POSAS (ρ=0.61). Construct validity of the 3D camera was also supported, with a moderate correlation (ρ=0.37) between maximal scar height and the same POSAS rating. The ultrasound is capable of detecting smaller changes or differences in scar thickness than the 3D camera in children with burn scars. However, agreement as part of reproducibility was lower than expected between raters for the ultrasound; improving the accuracy of scar relocation may go some way toward addressing this. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Maximizing Your Grant Development: A Guide for CEOs.
ERIC Educational Resources Information Center
Snyder, Thomas
1993-01-01
Since most private and public sources of external funding generally expect increased effort and accountability, Chief Executive Officers (CEOs) at two-year colleges must inform faculty and staff that if they do not expend extra effort their college will not receive significant grants. The CEO must also work with the college's professional…
A Prelude to Strategic Management of an Online Enterprise
ERIC Educational Resources Information Center
Pan, Cheng-Chang; Sivo, Stephen A.; Goldsmith, Clair
2016-01-01
Strategic management is expected to allow an organization to maximize given constraints and optimize limited resources in an effort to create a competitive advantage that leads to better results. For both for-profit and non-profit organizations, such strategic thinking helps the management make informed decisions and sustain long-term planning. To…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... state-operated permit banks for the purpose of maximizing the fishing opportunities made available by... activity to regain their DAS for that trip, providing another opportunity to profit from the DAS that would... entities. Further, no reductions in profit are expected for any small entities, so the profitability...
Smooth Transitions: Helping Students with Autism Spectrum Disorder Navigate the School Day
ERIC Educational Resources Information Center
Hume, Kara; Sreckovic, Melissa; Snyder, Kate; Carnahan, Christina R.
2014-01-01
In school, students are expected to navigate different types of transitions every day, including those between instructors, subjects, and instructional formats, as well as classrooms. Despite the routines that many teachers develop to facilitate efficient transitions and maximize instructional time, many learners with ASD continue to struggle with…
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
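The expectation-maximization iteration referred to here can be illustrated at toy scale. The sketch below runs EM on a synthetic 1-D two-component Gaussian mixture; it is a generic illustration of the E-step (posterior responsibilities) and M-step (weighted re-estimation), not the authors' cryo-EM reconstruction code, and it assumes unit variances and equal mixing weights for brevity.

```python
import random
from math import exp, pi, sqrt

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def em_two_gaussians(data, mu=(-1.0, 1.0), iters=50):
    """Toy EM for a two-component 1-D Gaussian mixture (unit variances, equal weights)."""
    mu0, mu1 = mu
    for _ in range(iters):
        # E-step: responsibility of component 1 for each data point
        r = [gauss(x, mu1, 1.0) / (gauss(x, mu0, 1.0) + gauss(x, mu1, 1.0))
             for x in data]
        # M-step: re-estimate each mean as a responsibility-weighted average
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)
    return mu0, mu1

random.seed(0)
data = ([random.gauss(-2.0, 0.5) for _ in range(200)]
        + [random.gauss(3.0, 0.5) for _ in range(200)])
m0, m1 = em_two_gaussians(data)
print(round(m0, 1), round(m1, 1))  # means recovered near -2 and 3
```

In the cryo-EM setting the latent variables are particle orientations rather than component labels, but the alternation between expected assignments and maximum-likelihood re-estimation is the same.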
A Benefit-Maximization Solution to Our Faculty Promotion and Tenure Process
ERIC Educational Resources Information Center
Barat, Somjit; Harvey, Hanafiah
2015-01-01
Tenure-track/tenured faculty at higher education institutions are expected to teach, conduct research and provide service as part of their promotion and tenure process, the relative importance of each component varying with the position and/or the university. However, based on the author's personal experience, feedback received from several…
"At Least One" Way to Add Value to Conferences
ERIC Educational Resources Information Center
Wilson, Warren J.
2005-01-01
In "EDUCAUSE Quarterly," Volume 25, Number 3, 2002, Joan Getman and Nikki Reynolds published an excellent article about getting the most from a conference. They listed 10 strategies that a conference attendee could use to maximize the conference's yield in information and motivation: (1) Plan ahead; (2) Set realistic expectations; (3) Use e-mail…
ERIC Educational Resources Information Center
Tseng, Hung Wei; Yeh, Hsin-Te
2013-01-01
Teamwork factors can facilitate team members' commitment to maximizing their own and others' contributions and successes. It is important for online instructors to understand students' expectations about learning collaboratively. The aims of this study were to investigate online collaborative learning experiences and to…
A Probability Based Framework for Testing the Missing Data Mechanism
ERIC Educational Resources Information Center
Lin, Johnny Cheng-Han
2013-01-01
Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data
ERIC Educational Resources Information Center
Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.
2009-01-01
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... the status quo. The action is expected to maximize the profitability for the spiny dogfish fishery... possible commercial quotas by not making a deduction from the ACL accounting for management uncertainty...) in 2015; however, not accounting for management uncertainty would have increased the risk of...
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. This identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed for both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
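The clustering step this abstract describes can be illustrated with a minimal expectation-maximization routine for a two-component 1-D Gaussian mixture. This is a generic EM sketch in plain Python, not the authors' steganalysis feature pipeline; the initialization from the data extremes, unit starting variances, and the fixed iteration count are simplifying assumptions.

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """Fit a two-component Gaussian mixture to 1-D data by EM."""
    # Initialize means at the data extremes; unit variances, equal weights.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility r[k] of component k for each point x,
        # proportional to w[k] * N(x | mu[k], var[k]).
        resp = []
        for x in data:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, w
```

In a steganalysis setting the scalar `x` would be replaced by a feature vector extracted from each image, with one mixture component per embedding class.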
Liu, Haiguang; Spence, John C H
2014-11-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bar-Shalom, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; 
Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; 
Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rajaraman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, 
M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, F; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-01-30
Models of maximal flavor violation (MxFV) in elementary particle physics may contain at least one new scalar SU(2) doublet field Phi_FV = (eta^0, eta^+) that couples the first and third generation quarks (q_1, q_3) via a Lagrangian term L_FV = xi_13 Phi_FV q_1 q_3. These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb^-1 collected by the Collider Detector at Fermilab II detector in p pbar collisions at sqrt(s) = 1.96 TeV are analyzed for evidence of the MxFV signature. For a neutral scalar eta^0 with m_eta^0 = 200 GeV/c^2 and coupling xi_13 = 1, approximately 11 signal events are expected over a background of 2.1 +/- 1.8 events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling xi_13 for m_eta^0 = 180-300 GeV/c^2.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood expectation maximization (ML-EM) algorithm, the ordered subset expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) setting and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM algorithm and small ROI setting reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
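The ML-EM update underlying both algorithms can be sketched on a tiny tabular system. This is the generic multiplicative ML-EM iteration for count data with a known system matrix, not the authors' CT implementation; the list-based system matrix `A` and uniform initial image are illustrative assumptions.

```python
def mlem(y, A, iters=20):
    """ML-EM image update.

    y: measured counts per detector bin.
    A[i][j]: system matrix, contribution of voxel j to bin i.
    Returns the estimated voxel intensities x.
    """
    n_bins, n_vox = len(A), len(A[0])
    x = [1.0] * n_vox  # uniform, strictly positive initial image
    # Sensitivity of each voxel (column sums of A).
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_vox)]
    for _ in range(iters):
        # Forward-project the current image estimate.
        fproj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
        ratio = [y[i] / fproj[i] if fproj[i] > 0 else 0.0 for i in range(n_bins)]
        # Multiplicative update: back-project the measured/estimated ratio.
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(n_bins)) / sens[j]
             for j in range(n_vox)]
    return x
```

OS-EM differs only in applying this update over ordered subsets of the bins per pass, which is where the reported speed-up comes from.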
Measuring Generalized Expectancies for Negative Mood Regulation.
ERIC Educational Resources Information Center
Catanzaro, Salvatore J.; Mearns, Jack
Research has suggested the utility of studying individual differences in the regulation of negative mood states. Generalized response expectancies for negative mood regulation were defined as expectancies that some overt behavior or cognition would alleviate negative mood states as they occur across situations. The Generalized Expectancy for…
Cano, I; Roca, J; Wagner, P D
2015-01-01
Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation-perfusion and skeletal muscle metabolism-perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (VO2) and on mitochondrial PO2. In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 at sea level by 32 mmHg, but muscle VO2 by only 122 ml min^-1 (-3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced PO2 by less than 1 mmHg and VO2 by 32 ml min^-1 (-2.4%). Skeletal muscle heterogeneity led to a wide range of potential mitochondrial PO2 among muscle regions, a range that becomes narrower as O2 uptake increases, and in regions with a low ratio of metabolic capacity to blood flow, mitochondrial PO2 can exceed that of mixed muscle venous blood. For patients with severe COPD, peak VO2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of both VO2 and mitochondrial PO2 in health and in diseases such as COPD if the extent of both lung ventilation-perfusion and tissue metabolism-perfusion heterogeneity is known. PMID:25640017
The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.
Zhang, Xian; Hirsch, Joy
2013-01-15
Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting the expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials that provide rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and post-central gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with the primary role of the temporal derivative of expected utility in dynamic decision-making. Copyright © 2012 Elsevier Inc. All rights reserved.
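The quantities V and ∂V/∂t can be made concrete with a toy calculation. This sketch assumes an exponential discounting function V(τ) = R·exp(-k·τ) over the remaining time τ to the reward; the discount rate `k` is a hypothetical parameter, not a value fitted in the study.

```python
import math

def expected_utility(reward, tau, k=0.5):
    # Exponentially discounted value of a future reward:
    # V(tau) = R * exp(-k * tau), tau = remaining time to reward.
    return reward * math.exp(-k * tau)

def dV_dtau(reward, tau, k=0.5, h=1e-5):
    # Central-difference estimate of the temporal derivative of V,
    # the regressor analogous to dV/dt in the fMRI analysis.
    return (expected_utility(reward, tau + h, k)
            - expected_utility(reward, tau - h, k)) / (2 * h)
```

For exponential discounting the derivative is simply -k·V, so the two regressors are proportional at any instant yet evolve differently across a trial as τ shrinks.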
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
Graham, Jeffrey K; Smith, Myron L; Simons, Andrew M
2014-07-22
All organisms are faced with environmental uncertainty. Bet-hedging theory expects unpredictable selection to result in the evolution of traits that maximize the geometric-mean fitness even though such traits appear to be detrimental over the shorter term. Despite the centrality of fitness measures to evolutionary analysis, no direct test of the geometric-mean fitness principle exists. Here, we directly distinguish between predictions of competing fitness maximization principles by testing Cohen's 1966 classic bet-hedging model using the fungus Neurospora crassa. The simple prediction is that propagule dormancy will evolve in proportion to the frequency of 'bad' years, whereas the prediction of the alternative arithmetic-mean principle is the evolution of zero dormancy as long as the expectation of a bad year is less than 0.5. Ascospore dormancy fraction in N. crassa was allowed to evolve under five experimental selection regimes that differed in the frequency of unpredictable 'bad years'. Results were consistent with bet-hedging theory: final dormancy fraction in 12 genetic lineages across 88 independently evolving samples was proportional to the frequency of bad years, and evolved both upwards and downwards as predicted from a range of starting dormancy fractions. These findings suggest that selection results in adaptation to variable rather than to expected environments. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
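Cohen's 1966 optimization can be sketched as a one-dimensional search for the germination (non-dormant) fraction that maximizes log geometric-mean fitness across good and bad years. The yields and the dormant-survival rate below are illustrative assumptions, not parameters from the Neurospora experiment.

```python
import math

def log_geometric_mean_fitness(g, p_good, y_good=10.0, y_bad=0.0, survive=0.8):
    # Per-year fitness: the germinating fraction g realizes that year's yield,
    # while the dormant fraction (1 - g) persists with probability `survive`.
    w_good = g * y_good + (1 - g) * survive
    w_bad = g * y_bad + (1 - g) * survive
    # Maximizing geometric-mean fitness = maximizing the expected log fitness.
    return p_good * math.log(w_good) + (1 - p_good) * math.log(w_bad)

def best_germination_fraction(p_good, grid=1000):
    # Grid search over g in [0, 1); g = 1 is excluded because a fully
    # germinating lineage has zero fitness (log 0) in any bad year.
    gs = [i / grid for i in range(grid)]
    return max(gs, key=lambda g: log_geometric_mean_fitness(g, p_good))
```

Consistent with the experiment's prediction, the optimal dormant fraction (1 - g) grows as bad years become more frequent, even though some dormancy always lowers the arithmetic-mean fitness when good years dominate.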
Optimal rotation sequences for active perception
NASA Astrophysics Data System (ADS)
Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin
2016-05-01
One major objective of autonomous systems navigating in dynamic environments is gathering the information needed for self-localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception breaks down to the problem of calculating alignment sequences which maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), where the system decides in each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain changes over time, owing to the dynamic environment and the continuous movement of the spacecraft.
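The value-iteration step described above can be sketched on a generic finite MDP. This is textbook tabular value iteration, not the authors' manifold-based implementation; the reward `R[s][a]` stands in for the expected information gain of an alignment action, and the state/action sets are hypothetical.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """P[s][a]: list of (probability, next_state) pairs.
    R[s][a]: expected immediate reward (e.g. expected information gain).
    Returns the value function V and the greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup over all actions.
            best = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Greedy policy: in each state, the action maximizing the backed-up value.
    policy = {s: max(actions,
                     key=lambda a: R[s][a]
                     + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
              for s in states}
    return V, policy
```

In the paper's setting the state additionally encodes time, since the information gain is non-stationary; the backup itself is unchanged.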
Priess-Groben, Heather A; Hyde, Janet Shibley
2017-06-01
Mathematics motivation declines for many adolescents, which limits future educational and career options. The present study sought to identify predictors of this decline by examining whether implicit theories assessed in ninth grade (incremental/entity) predicted course-taking behaviors and utility value in college. The study integrated implicit theory with variables from expectancy-value theory to examine potential moderators and mediators of the association of implicit theories with college mathematics outcomes. Implicit theories and expectancy-value variables were assessed in 165 American high school students (47% female; 92% White), who were then followed into their college years, at which time mathematics courses taken, course-taking intentions, and utility value were assessed. Implicit theories predicted course-taking intentions and utility value, but only self-concept of ability predicted courses taken, course-taking intentions, and utility value after controlling for prior mathematics achievement and baseline values. Expectancy for success in mathematics mediated associations between self-concept of ability and college outcomes. This research identifies self-concept of ability as a stronger predictor than implicit theories of mathematics motivation and behavior across several years: math self-concept is critical to sustained engagement in mathematics.
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method
USDA-ARS?s Scientific Manuscript database
Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing its utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...
Biomass for biorefining: Resources, allocation, utilization, and policies
USDA-ARS?s Scientific Manuscript database
The importance of biomass in the development of renewable energy, the availability and allocation of biomass, its preparation for use in biorefineries, and the policies affecting biomass are discussed in this chapter. Bioenergy development will depend on maximizing the amount of biomass obtained fro...
The Child and Adolescent Psychiatry Trials Network
ERIC Educational Resources Information Center
March, John S.; Silva, Susan G.; Compton, Scott; Anthony, Ginger; DeVeaugh-Geiss, Joseph; Califf, Robert; Krishnan, Ranga
2004-01-01
Objective: The current generation of clinical trials in pediatric psychiatry often fails to maximize clinical utility for practicing clinicians, thereby diluting its impact. Method: To attain maximum clinical relevance and acceptability, the Child and Adolescent Psychiatry Trials Network (CAPTN) will transport to pediatric psychiatry the practical…
Medical Problem-Solving: A Critique of the Literature.
ERIC Educational Resources Information Center
McGuire, Christine H.
1985-01-01
Prescriptive decision-analysis of medical problem-solving has been based on decision theory that involves the calculation and manipulation of complex probability and utility values to arrive at optimal decisions that maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)
Why is Improving Water Quality in the Gulf of Mexico so Critical?
The EPA regional offices and the Gulf of Mexico Program work with Gulf States to continue to maximize the efficiency and utility of water quality monitoring efforts for local managers by coordinating and standardizing state and federal water quality data.
Maximizing internal opportunities for healthcare facilities facing a managed-care environment.
Gillespie, M
1997-01-01
The primary theme of this article concerns the pressures on healthcare facilities to become efficient utilizers of their existing resources. This acute need for efficiency has become increasingly apparent as the changing reimbursement patterns of managed care have proliferated across the nation.
Utilizing Partnerships to Maximize Resources in College Counseling Services
ERIC Educational Resources Information Center
Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas
2015-01-01
Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…
Designing advanced biochar products for maximizing greenhouse gas mitigation potential
USDA-ARS?s Scientific Manuscript database
Greenhouse gas (GHG) emissions from agricultural operations continue to increase. Carbon enriched char materials like biochar have been described as a mitigation strategy. Utilization of biochar material as a soil amendment has been demonstrated to provide potentially further soil GHG suppression du...
Paracrine communication maximizes cellular response fidelity in wound signaling
Handly, L Naomi; Pilko, Anna; Wollman, Roy
2015-01-01
Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet, variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly, we predict the existence of a maximum gradient signal-to-noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485
Integrating epidemiology, psychology, and economics to achieve HPV vaccination targets.
Basu, Sanjay; Chapman, Gretchen B; Galvani, Alison P
2008-12-02
Human papillomavirus (HPV) vaccines provide an opportunity to reduce the incidence of cervical cancer. Optimization of cervical cancer prevention programs requires anticipation of the degree to which the public will adhere to vaccination recommendations. To compare vaccination levels driven by public perceptions with levels that are optimal for maximizing the community's overall utility, we develop an epidemiological game-theoretic model of HPV vaccination. The model is parameterized with survey data on actual perceptions regarding cervical cancer, genital warts, and HPV vaccination collected from parents of vaccine-eligible children in the United States. The results suggest that perceptions of survey respondents generate vaccination levels far lower than those that maximize overall health-related utility for the population. Vaccination goals may be achieved by addressing concerns about vaccine risk, particularly those related to sexual activity among adolescent vaccine recipients. In addition, cost subsidizations and shifts in federal coverage plans may compensate for perceived and real costs of HPV vaccination to achieve public health vaccination targets.
Recovery Act: Brea California Combined Cycle Electric Generating Plant Fueled by Waste Landfill Gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galowitz, Stephen
The primary objective of the Project was to maximize the productive use of the substantial quantities of waste landfill gas generated and collected at the Olinda Landfill near Brea, California. An extensive analysis was conducted, and it was determined that utilization of the waste gas for power generation in a combustion turbine combined cycle facility was the highest and best use. The resulting Project reflected a cost-effective balance of the following specific sub-objectives: • Meeting the environmental and regulatory requirements, particularly the compliance obligations imposed on the landfill to collect, process and destroy landfill gas • Utilizing proven and reliable technology and equipment • Maximizing electrical efficiency • Maximizing electric generating capacity, consistent with the anticipated quantities of landfill gas generated and collected at the Olinda Landfill • Maximizing equipment uptime • Minimizing water consumption • Minimizing post-combustion emissions. The Project produced and will produce a myriad of beneficial impacts: o The Project created 360 FTE construction and manufacturing jobs and 15 FTE permanent jobs associated with the operation and maintenance of the plant and equipment. o By combining state-of-the-art gas clean-up systems with post-combustion emissions control systems, the Project established new national standards for best available control technology (BACT). o The Project will annually produce 280,320 MWh of clean energy. o By destroying the methane in the landfill gas, the Project will generate CO2-equivalent reductions of 164,938 tons annually. The completed facility produces 27.4 MW net and operates 24 hours a day, seven days a week.
The Priority Heuristic: Making Choices Without Trade-Offs
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2010-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
Liu, Shu-Ming; Wang, Shi-Jun; Song, Si-Yao; Zou, Yong; Wang, Jun-Ru; Sun, Bing-Yin
Great variations have been found in the composition and content of the essential oil of Zanthoxylum bungeanum Maxim. (Rutaceae), resulting from various factors such as harvest time, drying and extraction methods (Huang et al., 2006; Shao et al., 2013), and the solvent and herbal parts used (Zhang, 1996; Cao and Zhang, 2010; Wang et al., 2011). However, in terms of artificial introduction and cultivation, there is little research on the chemical composition of essential oil extracted from Z. bungeanum Maxim. cultivars introduced from different origins. In this study, the composition and content of essential oil from six cultivars (I-VI) were investigated. They were introduced and cultivated for 11 years under the same cultivation conditions. The cultivars were as follows: Qin'an (I), originally introduced from Qin'an City in Gansu Province; Dahongpao A (II), from She County in Hebei Province; Dahongpao B (III), from Fuping County; Dahongpao C (IV), from Tongchuan City; Meifengjiao (V), from Feng County; and Shizitou (VI), from Hancheng City, in Shaanxi Province, China. This research is expected to provide a theoretical basis for further introduction, cultivation, and commercial development of Z. bungeanum Maxim.
NASA Astrophysics Data System (ADS)
Atalay, Bora; Berker, A. Nihat
2018-05-01
Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied for the number of states q = 3, 4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d > 1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q × q degenerate ordering. Note that this is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. In sharp immediate contrast to the infinitesimally higher dimension 1 + ε, the system is, as expected, disordered at all temperatures for d = 1.
Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A
2014-12-01
Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return on investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide a lower net benefit, with a return on investment of 79%. Cost-effectiveness and value of information analyses were based on data from a single pilot trial, which might affect the accuracy of our uncertainty estimation.
Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
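The core calculation behind this kind of design choice can be sketched in a few lines (a toy model, not the authors' analysis; the value-of-information curve, costs, and function names below are hypothetical stand-ins for quantities a real analysis would estimate by probabilistic Monte Carlo simulation):

```python
def expected_net_benefit(n_per_arm, arms=4, cost_per_patient=500.0,
                         fixed_cost=50_000.0, max_value=2_000_000.0, k=300.0):
    # Diminishing-returns stand-in for the expected value of sample
    # information: more patients buy less and less additional certainty.
    evsi = max_value * n_per_arm / (n_per_arm + k)
    trial_cost = fixed_cost + cost_per_patient * arms * n_per_arm
    return evsi - trial_cost

# Pick the arm size that maximizes the expected net benefit of research,
# rather than one derived from type I / type II error calculations.
candidate_sizes = range(50, 501, 10)  # patients per arm
best = max(candidate_sizes, key=expected_net_benefit)
roi = expected_net_benefit(best) / (50_000.0 + 500.0 * 4 * best)
```

With these invented numbers the optimum lands at a moderate arm size with a positive return on investment, the same qualitative conclusion the pilot analysis reached; the actual figures (220 patients/arm, 130% ROI) come from the study's own simulations.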
Establishing rational networking using the DL04 quantum secure direct communication protocol
NASA Astrophysics Data System (ADS)
Qin, Huawang; Tang, Wallace K. S.; Tso, Raylin
2018-06-01
The first rational quantum secure direct communication scheme is proposed, in which we use game theory with incomplete information to model the rational behavior of the participant and give the strategy space and utility function. A rational participant gets his maximal utility when he performs the protocol faithfully, and then the Nash equilibrium of the protocol can be achieved. Compared to the traditional schemes, our scheme will be more practical in the presence of a rational participant.
Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.
Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen
2016-01-01
Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming.
Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift
Fleming, Theresa M.; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M.; Aschieri, Filippo; Bavin, Lynda M.; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen
2016-01-01
Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094
Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal
2017-01-01
Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance, minimal-redundancy criterion-based feature (gene) selection for multi-omics datasets. Firstly, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximal-relevant but variable-weighted minimum-redundant genes as top-ranked genes. For statistical validation, we apply the t-test on both the expression and methylation data consisting of only the normally distributed top-ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform the non-parametric Empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top-ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene-markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods based on classification performances obtained using several well-known classifiers.
Understanding the factors that affect maximal fat oxidation.
Purdom, Troy; Kravitz, Len; Dokladny, Karol; Mermier, Christine
2018-01-01
Lipids as a fuel source for energy supply during submaximal exercise originate from subcutaneous adipose tissue-derived fatty acids (FA), intramuscular triacylglycerides (IMTG), cholesterol, and dietary fat. These sources of fat contribute to fatty acid oxidation (FAox) in various ways. The regulation and utilization of FAs at maximal capacity occur primarily at exercise intensities between 45 and 65% VO2max; this is known as maximal fat oxidation (MFO) and is measured in g/min. Fatty acid oxidation occurs during submaximal exercise intensities and is complementary to carbohydrate oxidation (CHOox). Due to limitations in FA transport across the cell and mitochondrial membranes, FAox is limited at higher exercise intensities. The point at which FAox reaches its maximum and begins to decline is referred to as the crossover point. Exercise intensities that exceed the crossover point (~65% VO2max) utilize CHO as the predominant fuel source for energy supply. Training status, exercise intensity, exercise duration, sex differences, and nutrition have all been shown to affect the cellular expression responsible for FAox rate. Each stimulus affects the process of FAox differently, resulting in specific adaptations that influence endurance exercise performance. Endurance training, specifically of long duration (>2 h), facilitates adaptations that alter both the origin of FAs and the FAox rate. Additionally, the influence of sex and nutrition on FAox is discussed. Finally, the role of FAox in the improvement of performance during endurance training is discussed.
Bravo, Diamond Y; Umaña-Taylor, Adriana J; Guimond, Amy B; Updegraff, Kimberly A; Jahromi, Laudan B
2014-07-01
The current longitudinal study examined how familism values and family ethnic socialization impacted Mexican-origin adolescent mothers' (N = 205) educational adjustment (i.e., educational expectations, educational utility), and whether these associations were moderated by adolescent mothers' ethnic centrality. Findings indicated that adolescent mothers' reports of familism values and family ethnic socialization were positively associated with their beliefs about educational utility, but not educational expectations. Ethnic centrality moderated the association between adolescent mothers' familism values and educational utility, such that adolescent mothers' endorsement of familism values during pregnancy were associated with significant increases in educational utility after their transition to parenthood, but only when adolescents reported high levels of ethnic centrality. Moreover, ethnic centrality was positively associated with adolescent mothers' educational expectations. Results highlight the importance of familism, ethnic socialization, and ethnic centrality for promoting Mexican-origin adolescent mothers' educational outcomes. Findings are discussed with respect to understanding adolescent mothers' educational adjustment in the context of family and culture.
Life expectancy calculation in urology: Are we equitably treating older patients?
Bhatt, Nikita R; Davis, Niall F; Breen, Kieran; Flood, Hugh D; Giri, Subhasis K
2017-01-01
The aim of our study was to determine the contemporary practice in the utilization of life expectancy (LE) calculations among urological clinicians. Members of the Irish Society of Urology (ISU) and the British Association of Urological Surgeons (BAUS) completed a questionnaire on LE utilization in urological practice. The survey was delivered to 1251 clinicians and the response rate was 17% (n = 208/1251). The majority (61%, n = 127) of urologists were aware of methods available for estimated LE calculation. Seventy-one percent (n = 148) had never utilized LE analysis in clinical practice and 81% (n = 170) routinely used 'eyeballing' (empiric prediction) for estimating LE. Life expectancy tables were utilized infrequently (12%, n = 25) in making the decision for treatment in the setting of multi-disciplinary meetings. LE is poorly integrated into treatment decision-making; not only for the management of urological patients but also in the multidisciplinary setting. Further education and awareness regarding the importance of LE is vital.
Familism, Family Ethnic Socialization, and Mexican-Origin Adolescent Mothers’ Educational Adjustment
Bravo, Diamond Y.; Umaña-Taylor, Adriana J.; Guimond, Amy B.; Updegraff, Kimberly A.; Jahromi, Laudan B.
2016-01-01
The current longitudinal study examined how familism values and family ethnic socialization impacted Mexican-origin adolescent mothers’ (N = 205) educational adjustment (i.e., educational expectations, educational utility), and whether these associations were moderated by adolescent mothers’ ethnic centrality. Findings indicated that adolescent mothers’ reports of familism values and family ethnic socialization were positively associated with their beliefs about educational utility, but not educational expectations. Ethnic centrality moderated the association between adolescent mothers’ familism values and educational utility, such that adolescent mothers’ endorsement of familism values during pregnancy were associated with significant increases in educational utility after their transition to parenthood, but only when adolescents reported high levels of ethnic centrality. Moreover, ethnic centrality was positively associated with adolescent mothers’ educational expectations. Results highlight the importance of familism, ethnic socialization, and ethnic centrality for promoting Mexican-origin adolescent mothers’ educational outcomes. Findings are discussed with respect to understanding adolescent mothers’ educational adjustment in the context of family and culture. PMID:25045950
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size of hemodialysis units that maximizes production given capital constraints, a national study was conducted in Mexico in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best given the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, with four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production because they do not fully utilize equipment and personnel, particularly their water purifier potential, which happens to be the most expensive asset for these units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, at the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. At the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function, which depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.
Wireless Sensor Network-Based Service Provisioning by a Brokering Platform
Guijarro, Luis; Pla, Vicent; Vidal, Jose R.; Naldi, Maurizio; Mahmoodi, Toktam
2017-01-01
This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit through posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not trespass a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility. PMID:28498347
Wireless Sensor Network-Based Service Provisioning by a Brokering Platform.
Guijarro, Luis; Pla, Vicent; Vidal, Jose R; Naldi, Maurizio; Mahmoodi, Toktam
2017-05-12
This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit through posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not trespass a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility.
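The platform's two-sided pricing problem can be illustrated with a toy search over posted prices (all parameters and names below are hypothetical, not taken from the paper): the platform charges users up to their participation constraint, which loosens as the WSIP externality grows.

```python
# Hypothetical illustration, not the paper's model or parameters.
N_USERS, N_WSIPS = 1000, 10
VALUE, EXTERNALITY, USER_COST, WSIP_COST = 5.0, 0.4, 1.0, 300.0

def profit(p_u, p_w):
    # All-or-nothing participation: WSIPs join if the payment covers their
    # cost; users join if service value plus the WSIP externality covers
    # their posted price and access cost.
    wsips = N_WSIPS if p_w >= WSIP_COST else 0
    users = N_USERS if VALUE + EXTERNALITY * wsips - USER_COST >= p_u else 0
    return users * p_u - wsips * p_w

prices_u = [0.5 * i for i in range(21)]    # user price grid: 0.0 .. 10.0
prices_w = [100.0 * i for i in range(13)]  # WSIP payment grid: 0 .. 1200
best_pu, best_pw = max(((u, w) for u in prices_u for w in prices_w),
                       key=lambda pair: profit(*pair))
```

At the optimum the platform pays WSIPs just enough to join and prices users exactly at their (externality-boosted) willingness to pay, echoing the paper's finding that profit is maximized by bringing both sides onto the platform.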
NASA Astrophysics Data System (ADS)
Zhang, Cuihua; Xing, Peng
2015-08-01
In recent years, the Chinese service industry has been developing rapidly. Compared with developed countries, however, service quality remains the bottleneck for the Chinese service industry. Against the background of the three major telecommunications service providers in China, functions of customer perceived utility are established. With the goal of maximizing the consumer's perceived utility, the classic Nash equilibrium solution and the quantum equilibrium solution are obtained. A numerical example is then studied, and the changing trend of service quality and customer perceived utility is further analyzed under the influence of the entanglement operator. Finally, it is shown that the quantum game solution is better than the Nash equilibrium solution.
Collins, Stephen C; Kim, Soorin; Chan, Esther
2017-11-29
Religion can have a significant influence on the experience of infertility. However, it is unclear how many US women turn to religion when facing infertility. Here, we examine the utilization of prayer and clergy counsel among a nationally representative sample of 1062 infertile US women. Prayer was used by 74.8% of the participants, and clergy counsel was the most common formal support system utilized. Both prayer and clergy counsel were significantly more common among black and Hispanic women. Healthcare providers should acknowledge the spiritual needs of their infertile patients and ally with clergy when possible to provide maximally effective care.
Deriving the expected utility of a predictive model when the utilities are uncertain.
Cooper, Gregory F; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
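A minimal sketch of the idea (not the authors' implementation; the case probabilities, threshold policy, and utility ranges below are invented for illustration): sample the uncertain utilities, score the model's decisions under each draw, and average to get the expected utility under utility uncertainty.

```python
import random

random.seed(0)

# Hypothetical test cases: (model's predicted P(disease), true label).
cases = [(0.9, 1), (0.8, 1), (0.7, 0), (0.4, 1), (0.2, 0), (0.1, 0)]

def utility_of_threshold(t, u_tp, u_fp, u_fn, u_tn):
    # Average per-case utility of "treat if P(disease) >= t".
    total = 0.0
    for p, truth in cases:
        if p >= t:
            total += u_tp if truth else u_fp
        else:
            total += u_fn if truth else u_tn
    return total / len(cases)

def expected_utility(t, draws=5000):
    # Marginalize out the uncertain utilities by Monte Carlo averaging.
    acc = 0.0
    for _ in range(draws):
        u_tp = random.uniform(0.8, 1.0)  # correct treatment
        u_fp = random.uniform(0.2, 0.6)  # unnecessary treatment
        u_fn = random.uniform(0.0, 0.3)  # missed disease
        u_tn = 1.0                       # correctly untreated
        acc += utility_of_threshold(t, u_tp, u_fp, u_fn, u_tn)
    return acc / draws

eu_low, eu_mid = expected_utility(0.1), expected_utility(0.5)
```

Comparing two candidate models (or thresholds) by their expected utilities in this way complements decision-independent measures such as the area under an ROC curve.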
Utilizing the School Health Index to Foster University and Community Engagement
ERIC Educational Resources Information Center
King, Kristi McClary
2010-01-01
A Coordinated School Health Program maximizes a school's positive interaction among health education, physical education, health services, nutrition services, counseling/psychological/social services, a healthy school environment, health promotion for staff, and family and community involvement. The purpose of this semester project is for…
Releases: Is There Still a Place for Their Use by Colleges and Universities?
ERIC Educational Resources Information Center
Connell, Mary Ann; Savage, Frederick G.
2003-01-01
Analyzes the legal principles, facts, and circumstances that govern decisions of courts regarding the validity of written releases, and provides practical advice to higher education lawyers and administrators as they evaluate the utility of releases and seek to maximize their benefit. (EV)
DOT National Transportation Integrated Search
2003-08-01
Over the past half-century, the progress of travel behavior research and travel demand forecasting has been spearheaded and continuously propelled by micro-economic theories, specifically utility maximization. There is no denial that the tra...
Innovative Conference Curriculum: Maximizing Learning and Professionalism
ERIC Educational Resources Information Center
Hyland, Nancy; Kranzow, Jeannine
2012-01-01
This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…
Method of optimizing performance of Rankine cycle power plants
Pope, William L.; Pines, Howard S.; Doyle, Padraic A.; Silvester, Lenard F.
1982-01-01
A method for efficiently operating a Rankine cycle power plant (10) to maximize fuel utilization efficiency or energy conversion efficiency or minimize costs by selecting a turbine (22) fluid inlet state which is substantially in the area adjacent and including the transposed critical temperature line (46).
ERIC Educational Resources Information Center
Ewers, Justin
2009-01-01
It seems to happen every day. A meeting is called to outline a new strategy or sales plan. Down go the lights and up goes the PowerPoint. Strange phrases appear--"unlocking shareholder value," "technology-focused innovation," "maximizing utility." Lists of numbers come and go. Bullet point by bullet point, the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitelaw, R.W.
1987-01-01
The market research techniques available now to the electric utility industry have evolved over the last thirty years into a set of sophisticated tools that permit complex behavioral analyses that earlier had been impossible. The marketing questions facing the electric utility industry now are commensurately more complex than ever before. This document was undertaken to present the tools and techniques needed to start or improve the usefulness of market research activities within electric utilities. It describes proven planning and management techniques as well as decision criteria for structuring effective market research functions for each utility's particular needs. The monograph establishes the parameters of sound utility market research given trade-offs between highly centralized or decentralized organizations, research focus, involvement in decision making, and personnel and management skills necessary to maximize the effectiveness of the structure chosen.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
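The threshold structure of the optimal policy is easy to see in a stripped-down backward induction (a sketch, not the paper's model: condition decays deterministically here rather than stochastically, and all parameters are invented):

```python
T = 10                    # encounters remaining in the season
QUALITIES = range(1, 11)  # male quality, uniform on 1..10

def condition(t):
    # Deterministic decay stands in for the model's stochastic condition
    # dynamics and condition-dependent search costs.
    return 1.0 - 0.05 * t

V = [0.0] * (T + 1)       # V[T] = 0: the season ends unmated
thresholds = [None] * T
for t in range(T - 1, -1, -1):
    c = condition(t)
    # Accept quality q now if the offspring reward c * q beats waiting.
    V[t] = sum(max(c * q, V[t + 1]) for q in QUALITIES) / len(QUALITIES)
    thresholds[t] = min((q for q in QUALITIES if c * q >= V[t + 1]),
                        default=None)
```

The computed policy is a quality threshold that falls as the season runs out and the female's condition declines: early on she is choosy, and at the last encounter she accepts any male, matching the qualitative result of the full stochastic model.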
ERIC Educational Resources Information Center
Lucas, Christopher M.
2009-01-01
For educators in the field of higher education and judicial affairs, issues are growing. Campus adjudicators must somehow maximize every opportunity for student education and development in the context of declining resources and increasing expectations of public accountability. Numbers of student misconduct cases, including matters of violence and…
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Charles T. Stiff; William F. Stansfield
2004-01-01
Separate thinning guidelines were developed for maximizing land expectation value (LEV), present net worth (PNW), and total sawlog yield (TSY) of existing and future loblolly pine (Pinus taeda L.) plantations in eastern Texas. The guidelines were created using data from simulated stands which were thinned one time during their rotation using a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... the season through December 31, the end of the fishing year, thus maximizing this sector's opportunity... expected to significantly reduce profits for a substantial number of small entities. This proposed rule... and associated increased profits for for-hire entities associated with the recreational harvest of red...
ERIC Educational Resources Information Center
Bouchet, Francois; Harley, Jason M.; Trevors, Gregory J.; Azevedo, Roger
2013-01-01
In this paper, we present the results obtained using a clustering algorithm (Expectation-Maximization) on data collected from 106 college students learning about the circulatory system with MetaTutor, an agent-based Intelligent Tutoring System (ITS) designed to foster self-regulated learning (SRL). The three extracted clusters were validated and…
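The Expectation-Maximization clustering used above can be illustrated with a minimal one-dimensional, two-component Gaussian mixture; the data and settings here are synthetic, not the MetaTutor dataset.

```python
import math
import random

# Minimal EM sketch: fit a two-component 1-D Gaussian mixture to synthetic
# data drawn from N(0, 1) and N(5, 1). Invented example for illustration.
random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(5, 1) for _ in range(200)])

mu, sigma, pi = [min(data), max(data)], [1.0, 1.0], [0.5, 0.5]

def pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

for _ in range(50):
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        w = [pi[k] * pdf(x, mu[k], sigma[k]) for k in (0, 1)]
        z = sum(w)
        resp.append([wi / z for wi in w])
    # M-step: re-estimate mixing weights, means, and variances
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(var)
```

After convergence the fitted means sit near the true cluster centers, and the responsibilities give a soft cluster assignment for each point.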
USDA-ARS?s Scientific Manuscript database
Water shortages are responsible for the greatest crop losses around the world and are expected to worsen. In arid areas where agriculture is dependent on irrigation, various forms of deficit irrigation management have been suggested to optimize crop yields for available soil water. The relationshi...
Optimizing reserve expansion for disjunct populations of San Joaquin kit fox
Robert G. Haight; Brian Cypher; Patrick A. Kelly; Scott Phillips; Katherine Ralls; Hugh P. Possingham
2004-01-01
Expanding habitat protection is a common strategy for species conservation. We present a model to optimize the expansion of reserves for disjunct populations of an endangered species. The objective is to maximize the expected number of surviving populations subject to budget and habitat constraints. The model accounts for benefits of reserve expansion in terms of...
Benefits of advanced software techniques for mission planning systems
NASA Technical Reports Server (NTRS)
Gasquet, A.; Parrod, Y.; Desaintvincent, A.
1994-01-01
The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.
Benefits of advanced software techniques for mission planning systems
NASA Astrophysics Data System (ADS)
Gasquet, A.; Parrod, Y.; Desaintvincent, A.
1994-10-01
The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods: listwise deletion, full information maximum likelihood, regression imputation, and expectation maximization (EM) imputation were examined in terms of…
What Influences Young Canadians to Pursue Post-Secondary Studies? Final Report
ERIC Educational Resources Information Center
Dubois, Julie
2002-01-01
This paper uses the theory of human capital to model post-secondary education enrolment decisions. The model is based on the assumption that high school graduates assess the costs and benefits associated with various levels of post-secondary education (college or university) and select the option that maximizes the expected net present value.…
Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua
2018-01-01
Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
ERIC Educational Resources Information Center
Schulze, Pamela A.; Harwood, Robin L.; Schoelmerich, Axel
2001-01-01
Investigated differences in beliefs and practices about infant feeding among middle class Anglo and Puerto Rican mothers. Interviews and observations indicated that Anglo mothers reported earlier attainment of self-feeding and more emphasis on child rearing goals related to self-maximization. Puerto Rican mothers reported later attainment of…
Solar-Energy System for a Commercial Building--Topeka, Kansas
NASA Technical Reports Server (NTRS)
1982-01-01
Report describes a solar-energy system for space heating, cooling and domestic hot water at a 5,600 square-foot (520-square-meter) Topeka, Kansas, commercial building. System is expected to provide 74% of annual cooling load, 47% of heating load, and 95% of domestic hot-water load. System was included in building design to maximize energy conservation.
Magnetic Tape Storage and Handling: A Guide for Libraries and Archives.
ERIC Educational Resources Information Center
Van Bogart, John W. C.
This document provides a guide on how to properly store and care for magnetic media to maximize their life expectancies. An introduction compares magnetic media to paper and film and outlines the scope of the report. The second section discusses things that can go wrong with magnetic media. Binder degradation, magnetic particle instabilities,…
Autonomous entropy-based intelligent experimental design
NASA Astrophysics Data System (ADS)
Malakar, Nabin Kumar
2011-07-01
The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it enables information-based collaboration between two robotic units toward a shared goal in an automated fashion.
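The core selection rule of the inquiry engine, choosing the experiment whose distribution of expected results has maximum entropy, can be sketched as follows; the candidate experiments and their predictive distributions are invented for illustration.

```python
import math

# Pick the candidate experiment whose predicted-outcome distribution has
# maximum Shannon entropy, i.e. the one expected to be most informative.
def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# Hypothetical predictive distributions over three possible outcomes:
candidates = {
    "exp_A": [0.9, 0.05, 0.05],      # outcome nearly certain: little to learn
    "exp_B": [0.4, 0.3, 0.3],
    "exp_C": [1 / 3, 1 / 3, 1 / 3],  # maximally uncertain: most to learn
}
best = max(candidates, key=lambda k: entropy(candidates[k]))
```

In the thesis, the search over candidate experiments is done with nested entropy sampling rather than the brute-force maximum taken here.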
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando
2014-01-01
Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. As electrical energy provided by the battery is needed to temper it, the question arises at which temperature which amount of energy optimally should be utilized for tempering. Therefore, the objective function that has to be optimized contains both the goal to maximize life expectancy and to minimize the amount of energy used for obtaining the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of major factors that rule the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require the exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
Undecidability in macroeconomics
NASA Technical Reports Server (NTRS)
Chandra, Siddharth; Chandra, Tushar Deepak
1993-01-01
In this paper we study the difficulty of solving problems in economics. For this purpose, we adopt the notion of undecidability from recursion theory. We show that certain problems in economics are undecidable, i.e., cannot be solved by a Turing Machine, a device that is at least as powerful as any computational device that can be constructed. In particular, we prove that even in finite closed economies subject to a variable initial condition, in which a social planner knows the behavior of every agent in the economy, certain important social planning problems are undecidable. Thus, it may be impossible to make effective policy decisions. Philosophically, this result formally brings into question the Rational Expectations Hypothesis which assumes that each agent is able to determine what it should do if it wishes to maximize its utility. We show that even when an optimal rational forecast exists for each agency (based on the information currently available to it), agents may lack the ability to make these forecasts. For example, Lucas describes economic models as 'mechanical, artificial world(s), populated by ... interacting robots'. Since any mechanical robot can be at most as computationally powerful as a Turing Machine, such economies are vulnerable to the phenomenon of undecidability.
Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de
In this paper we study continuous-time stochastic control problems with both monotone and classical controls, motivated by the so-called public good contribution problem: the problem of n economic agents aiming to maximize their expected utility by allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner's optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.
Starlings uphold principles of economic rationality for delay and probability of reward.
Monteiro, Tiago; Vasconcelos, Marco; Kacelnik, Alex
2013-04-07
Rationality principles are the bedrock of normative theories of decision-making in biology and microeconomics, but whereas in microeconomics, consistent choice underlies the notion of utility; in biology, the assumption of consistent selective pressures justifies modelling decision mechanisms as if they were designed to maximize fitness. In either case, violations of consistency contradict expectations and attract theoretical interest. Reported violations of rationality in non-humans include intransitivity (i.e. circular preferences) and lack of independence of irrelevant alternatives (changes in relative preference between options when embedded in different choice sets), but the extent to which these observations truly represent breaches of rationality is debatable. We tested both principles with starlings (Sturnus vulgaris), training subjects either with five options differing in food delay (exp. 1) or with six options differing in reward probability (exp. 2), before letting them choose repeatedly one option out of several binary and trinary sets of options. The starlings conformed to economic rationality on both tests, showing strong stochastic transitivity and no violation of the independence principle. These results endorse the rational choice and optimality approaches used in behavioural ecology, and highlight the need for functional and mechanistic enquiring when apparent violations of such principles are observed.
Simulating Operations at a Spaceport
NASA Technical Reports Server (NTRS)
Nevins, Michael R.
2007-01-01
SPACESIM is a computer program for detailed simulation of operations at a spaceport. SPACESIM is being developed to greatly improve existing spaceports and to aid in designing, building, and operating future spaceports, given that there is a worldwide trend in spaceport operations from very expensive, research- oriented launches to more frequent commercial launches. From an operational perspective, future spaceports are expected to resemble current airports and seaports, for which it is necessary to resolve issues of safety, security, efficient movement of machinery and people, cost effectiveness, timeliness, and maximizing effectiveness in utilization of resources. Simulations can be performed, for example, to (1) simultaneously analyze launches of reusable and expendable rockets and identify bottlenecks arising from competition for limited resources or (2) perform what-if scenario analyses to identify optimal scenarios prior to making large capital investments. SPACESIM includes an object-oriented discrete-event-simulation engine. (Discrete- event simulation has been used to assess processes at modern seaports.) The simulation engine is built upon the Java programming language for maximum portability. Extensible Markup Language (XML) is used for storage of data to enable industry-standard interchange of data with other software. A graphical user interface facilitates creation of scenarios and analysis of data.
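The engine at the heart of such a tool is a discrete-event loop over a time-ordered future-event list. A minimal sketch of that pattern (the events and times are invented, and SPACESIM itself is Java-based, not Python):

```python
import heapq

# Toy discrete-event simulation core: a heap-ordered future-event list.
# The simulation clock always jumps to the earliest scheduled event.
events = []   # heap of (time, description) tuples

def schedule(t, what):
    heapq.heappush(events, (t, what))

# Hypothetical spaceport events, scheduled out of order:
schedule(0.0, "vehicle arrives at pad")
schedule(2.5, "fueling complete")
schedule(1.0, "payload integration starts")

log = []
while events:
    t, what = heapq.heappop(events)   # advance clock to next event
    log.append((t, what))             # a real engine would also spawn new events
```

In a full engine, handling one event typically schedules further events (for example, "fueling complete" might schedule "rollout"), which is how resource contention and bottlenecks emerge in the simulation.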
Distribution of quantum Fisher information in asymmetric cloning machines
Xiao, Xing; Yao, Yao; Zhou, Lei-Ming; Wang, Xiaoguang
2014-01-01
An unknown quantum state cannot be copied and broadcast freely due to the no-cloning theorem. Approximate cloning schemes have been proposed to achieve the optimal cloning characterized by the maximal fidelity between the original and its copies. Here, from the perspective of quantum Fisher information (QFI), we investigate the distribution of QFI in asymmetric cloning machines which produce two nonidentical copies. As one might expect, improving the QFI of one copy results in decreasing the QFI of the other copy. It is perhaps also unsurprising that asymmetric phase-covariant cloning outperforms universal cloning in distributing QFI since a priori information of the input state has been utilized. However, interesting results appear when we compare the distributabilities of fidelity (which quantifies the full information of quantum states), and QFI (which only captures the information of relevant parameters) in asymmetric cloning machines. Unlike the results of fidelity, where the distributability of symmetric cloning is always optimal for any d-dimensional cloning, we find that any asymmetric cloning outperforms symmetric cloning on the distribution of QFI for d ≤ 18, whereas some but not all asymmetric cloning strategies could be worse than symmetric ones when d > 18. PMID:25484234
A clustering-based fuzzy wavelet neural network model for short-term load forecasting.
Kodogiannis, Vassilis S; Amina, Mahdi; Petrounias, Ilias
2013-10-01
Load forecasting is a critical element of power system operation, involving prediction of the future level of demand to serve as the basis for supply and demand planning. This paper presents the development of a novel clustering-based fuzzy wavelet neural network (CB-FWNN) model and validates its prediction on the short-term electric load forecasting of the Power System of the Greek Island of Crete. The proposed model is obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of fuzzy rules with a "multiplication" wavelet neural network (MWNN). Multidimensional Gaussian-type activation functions have been used in the IF part of the fuzzy rules. A Fuzzy Subtractive Clustering scheme is employed as a pre-processing technique to find out the initial set and adequate number of clusters and ultimately the number of multiplication nodes in MWNN, while Gaussian Mixture Models with the Expectation Maximization algorithm are utilized for the definition of the multidimensional Gaussians. The results corresponding to the minimum and maximum power load indicate that the proposed load forecasting model provides significantly accurate forecasts, compared to conventional neural networks models.
The DKIST Data Center: Meeting the Data Challenges for Next-Generation, Ground-Based Solar Physics
NASA Astrophysics Data System (ADS)
Davey, A. R.; Reardon, K.; Berukoff, S. J.; Hays, T.; Spiess, D.; Watson, F. T.; Wiant, S.
2016-12-01
The Daniel K. Inouye Solar Telescope (DKIST) is under construction on the summit of Haleakalā in Maui, and scheduled to start science operations in 2020. The DKIST design includes a four-meter primary mirror coupled to an adaptive optics system, and a flexible instrumentation suite capable of delivering high-resolution optical and infrared observations of the solar chromosphere, photosphere, and corona. Through investigator-driven science proposals, the facility will generate an average of 8 TB of data daily, comprised of millions of images and hundreds of millions of metadata elements. The DKIST Data Center is responsible for the long-term curation and calibration of data received from the DKIST, and for distributing it to the user community for scientific use. Two key elements necessary to meet the inherent big data challenge are the development of flexible public/private cloud computing and coupled relational and non-relational data storage mechanisms. We discuss how this infrastructure is being designed to meet the significant expectation of automatic and manual calibration of ground-based solar physics data, and to maximize the data's utility through efficient, long-term data management practices implemented with prudent process definition and technology exploitation.
Research on the feature set construction method for spherical stereo vision
NASA Astrophysics Data System (ADS)
Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia
2015-01-01
Spherical stereo vision is a kind of stereo vision system built from fish-eye lenses, with stereo algorithms that conform to the spherical model. Epipolar geometry is the theory describing the relationship between the two imaging planes in a stereo vision system based on the perspective projection model. In an uncorrected fish-eye image, however, the epipolar is not a line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Region (MSER) detection utilizes grayscale as the independent variable and uses the local extremum of the area variation as the test result. It has been demonstrated in the literature that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set of spherical stereo vision. Experiments show that this study achieved the expected results.
Hoan, Tran-Nhut-Khai; Hiep, Vu-Van; Koo, In-Soo
2016-03-31
This paper considers cognitive radio networks (CRNs) utilizing multiple time-slotted primary channels in which cognitive users (CUs) are powered by energy harvesters. The CUs are under the consideration that hardware constraints on radio devices only allow them to sense and transmit on one channel at a time. For a scenario where the arrival of harvested energy packets and the battery capacity are finite, we propose a scheme to optimize (i) the channel-sensing schedule (consisting of finding the optimal action (silent or active) and sensing order of channels) and (ii) the optimal transmission energy set corresponding to the channels in the sensing order for the operation of the CU in order to maximize the expected throughput of the CRN over multiple time slots. Frequency-switching delay, energy-switching cost, correlation in spectrum occupancy across time and frequency and errors in spectrum sensing are also considered in this work. The performance of the proposed scheme is evaluated via simulation. The simulation results show that the throughput of the proposed scheme is greatly improved, in comparison to related schemes in the literature. The collision ratio on the primary channels is also investigated.
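The value of optimizing the sensing order can be seen in a stripped-down version of the problem: a CU senses channels one at a time and transmits on the first channel found idle, so each channel's rate is weighted by the probability that all earlier channels in the order were busy, minus a switching penalty. All numbers below are invented, and the paper's energy-harvesting, sensing-error, and multi-slot aspects are omitted.

```python
import itertools

# Hypothetical channel statistics for a toy sensing-order evaluation:
p_idle = {"ch1": 0.2, "ch2": 0.6, "ch3": 0.5}   # P(channel is free)
rate = {"ch1": 5.0, "ch2": 2.0, "ch3": 3.0}     # throughput if transmitting
SWITCH_COST = 0.1                                # penalty per extra channel sensed

def expected_throughput(order):
    # CU transmits on the first idle channel in the sensing order.
    total, p_all_busy = 0.0, 1.0
    for i, ch in enumerate(order):
        total += p_all_busy * p_idle[ch] * (rate[ch] - SWITCH_COST * i)
        p_all_busy *= 1.0 - p_idle[ch]
    return total

# Brute-force search over sensing orders (the paper uses an optimized scheme):
best = max(itertools.permutations(p_idle), key=expected_throughput)
```

Note that the best order is not simply by idle probability or by rate alone; the trade-off between the two is what the sensing-schedule optimization resolves.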
Sylvia, Louisa G; Hay, Aleena; Ostacher, Michael J; Miklowitz, David J; Nierenberg, Andrew A; Thase, Michael E; Sachs, Gary S; Deckersbach, Thilo; Perlis, Roy H
2013-06-01
We sought to understand the association of specific aspects of care satisfaction, such as patients' perceived relationship with their psychiatrist and access to their psychiatrist and staff, and therapeutic alliance with participants' likelihood to adhere to their medication regimens among patients with bipolar disorder. We examined data from the multicenter Systematic Treatment Enhancement Program for Bipolar Disorder, an effectiveness study investigating the course and treatment of bipolar disorder. We expected that participants (n = 3037) with positive perceptions of their relationship with their psychiatrist and quality of psychopharmacologic care, as assessed by the Helping Alliance Questionnaire and Care Satisfaction Questionnaire, would be associated with better medication adherence. We utilized logistic regression models controlling for already established factors associated with poor adherence. Patients' perceptions of collaboration, empathy, and accessibility were significantly associated with adherence to treatment in individuals with bipolar disorder completing at least 1 assessment. Patients' perceptions of their psychiatrists' experience, as well as of their degree of discussing medication risks and benefits, were not associated with medication adherence. Patients' perceived therapeutic alliance and treatment environment impact their adherence to pharmacotherapy recommendations. This study may enable psychopharmacologists' practices to be structured to maximize features associated with greater medication adherence.
Hope in terminal illness: an evolutionary concept analysis.
Johnson, Sarah
2007-09-01
To clarify the concept of hope as perceived by patients with a terminal illness, to develop hope as an evidence-based nursing concept, and to contribute new knowledge and insights about hope to the relatively new field of palliative care, endeavouring to maximize the quality of life of terminally ill patients in the future. Utilizing Rodgers' (2000a) evolutionary concept analysis methodology and thematic content analysis, 17 pieces of research-based literature on hope as perceived by adult patients with any terminal illness pathology, from the disciplines of nursing and medicine, have been reviewed and analyzed. An exemplary case of the concept in action is presented along with the evolution of the concept hope in terminal illness. Ten essential attributes of the concept were identified: positive expectation; personal qualities; spirituality; goals; comfort; help/caring; interpersonal relationships; control; legacy; and life review. Patients' hopes and goals are scaled down and refocused in order to live in the present and enjoy the time they have left with loved ones. By completing all the steps of Rodgers' (2000a) evolutionary view of concept analysis, a working definition and clarification of the concept in its current use has been achieved. This provides a solid conceptual foundation for further study.
Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.
Lin, Tsung-I; Wang, Wan-Lun
2017-10-01
In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press.
Xiao, Hu; Cui, Rongxin; Xu, Demin
2018-06-01
This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with less resource consumptions, and to keep the resource consumption of each agent balanced simultaneously. To this end, a utility function-based optimization problem is put forward, and it is solved by a gradient-based approach. Several contrastive simulations demonstrate that compared with other existing approaches, the proposed one uses less overall resources and shows a better performance of balancing the resource consumption.
Ohno, Hajime; Matsubae, Kazuyo; Nakajima, Kenichi; Kondo, Yasushi; Nakamura, Shinichiro; Fukushima, Yasuhiro; Nagasaka, Tetsuya
2017-11-21
The importance of end-of-life vehicles (ELVs) as an urban mine is expected to grow, as more people in developing countries experience increased standards of living, while automobiles are increasingly made from high-quality materials to meet stricter environmental and safety requirements. While most materials in ELVs, particularly steel, have been recycled at high rates, quality issues have not been adequately addressed due to the complex use of automobile materials, leading to considerable losses of valuable alloying elements. This study highlights the maximal potential of quality-oriented recycling of ELV steel, by exploring utilization methods for scrap, sorted by parts, to produce electric-arc-furnace-based crude alloy steel with minimal losses of alloying elements. Using linear programming on the case of the Japanese economy in 2005, we found that adoption of parts-based scrap sorting could result in the recovery of around 94-98% of the alloying elements occurring in parts scrap (manganese, chromium, nickel, and molybdenum), which may replace 10% of the virgin sources in electric arc furnace-based crude alloy steel production.
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement a DDE solver in S-ADAPT using the method of steps, which allows one to solve virtually any DDE system by transforming it into an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with those obtained by the MATLAB DDE solver dde23. Parameter estimation was tested on MATLAB-simulated population pharmacodynamic data. The S-ADAPT solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with those used to generate the data. Published by Elsevier Ireland Ltd.
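The method of steps the abstract relies on can be sketched generically: on each interval of length tau the delayed term is a known function (the history, or the previously computed segment), so the DDE reduces to an ODE. This is an illustrative fixed-step implementation, not S-ADAPT's, applied to a scalar linear test problem of the kind the authors validated against.

```python
def solve_dde_steps(f, history, tau, t_end, dt=1e-3):
    """Method of steps for y'(t) = f(t, y(t), y(t - tau)).

    `history` gives y(t) for t <= 0.  Each interval is integrated with
    fixed-step RK4; the delayed value is read from the stored solution
    by linear interpolation.  Production solvers (e.g. MATLAB's dde23)
    use adaptive stepping and discontinuity tracking instead.
    """
    ts = [0.0]
    ys = [history(0.0)]

    def y_delayed(t):
        td = t - tau
        if td <= 0.0:
            return history(td)
        i = min(int(td / dt), len(ys) - 2)   # interpolate stored grid
        frac = (td - ts[i]) / dt
        return ys[i] + frac * (ys[i + 1] - ys[i])

    n = int(round(t_end / dt))
    for _ in range(n):
        t, y = ts[-1], ys[-1]
        k1 = f(t, y, y_delayed(t))
        k2 = f(t + dt / 2, y + dt * k1 / 2, y_delayed(t + dt / 2))
        k3 = f(t + dt / 2, y + dt * k2 / 2, y_delayed(t + dt / 2))
        k4 = f(t + dt, y + dt * k3, y_delayed(t + dt))
        ys.append(y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        ts.append(t + dt)
    return ts, ys

# Test problem: y'(t) = -y(t - 1) with y(t) = 1 for t <= 0.
# Analytic solution: 1 - t on [0, 1] and 1 - t + (t - 1)**2 / 2 on [1, 2].
ts, ys = solve_dde_steps(lambda t, y, yd: -yd, lambda t: 1.0, 1.0, 2.0)
```

On this problem the delayed term is piecewise linear, so both the interpolation and RK4 are exact up to rounding, which mirrors the multi-digit agreement the authors report.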
Cobb, Nathan K; Jacobs, Megan A; Wileyto, Paul; Valente, Thomas; Graham, Amanda L
2016-06-01
To examine the diffusion of an evidence-based smoking cessation application ("app") through Facebook social networks and identify specific intervention components that accelerate diffusion. Between December 2012 and October 2013, we recruited adult US smokers ("seeds") via Facebook advertising and randomized them to 1 of 12 app variants using a factorial design. App variants targeted components of diffusion: duration of use (t), "contagiousness" (β), and number of contacts (Z). The primary outcome was the reproductive ratio (R), defined as the number of individuals installing the app ("descendants") divided by the number of a seed participant's Facebook friends. We randomized 9042 smokers. App utilization metrics demonstrated between-variant differences in expected directions. The highest level of diffusion (R = 0.087) occurred when we combined active contagion strategies with strategies to increase duration of use (incidence rate ratio = 9.99; 95% confidence interval = 5.58, 17.91; P < .001). Involving nonsmokers did not affect diffusion. The maximal R value (0.087) is sufficient to increase the numbers of individuals receiving treatment if applied on a large scale. Online interventions can be designed a priori to spread through social networks.
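The reproductive ratio defined in the abstract is a simple per-seed quotient; a sketch with hypothetical cohort numbers (the seed counts below are illustrative, not the study's data):

```python
def reproductive_ratio(descendants, friends):
    """R for one seed: app installs among the seed's Facebook friends
    ("descendants") divided by the seed's friend count, per the
    abstract's definition."""
    if friends == 0:
        return 0.0
    return descendants / friends

# Hypothetical cohort, aggregated as total descendants / total friends:
seeds = [(3, 40), (0, 120), (5, 60)]   # (descendants, friends) per seed
total_R = sum(d for d, _ in seeds) / sum(f for _, f in seeds)
```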
Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am
2014-04-01
[Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects underwent an exercise loading test in which anaerobic threshold, ventilation volume, oxygen uptake, maximal oxygen uptake, heart rate, and maximal ventilation/heart rate values were measured during exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] The exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group also showed significantly higher values for the time to reach VO2max, maximal ventilation, maximal oxygen uptake, and maximal ventilation/heart rate. Within the exercise group, significant changes were observed in ventilation volume at the 1- and 5-min recovery time points; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal ventilation/heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The results revealed that Taekwondo as security martial arts training increases maximal oxygen uptake and anaerobic threshold and accelerates recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.
Method of Calibrating a Force Balance
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)
2015-01-01
A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
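The calibration idea above amounts to generating known forces from a known mass and acceleration, then fitting the balance reading to the expected force. A sketch under stated assumptions: the rotational case is taken as centripetal loading (F = m * omega^2 * r), the mass, radius, and sensor error model below are hypothetical, and the patented method's fixture details are omitted.

```python
def expected_centripetal_force(mass_kg, omega_rad_s, radius_m):
    """Expected load when the mass is spun at angular rate omega:
    F = m * omega**2 * r.  Linear acceleration of the mass would
    instead give simply F = m * a."""
    return mass_kg * omega_rad_s ** 2 * radius_m

def fit_calibration(readings, forces):
    """Least-squares line force = gain * reading + offset, mapping raw
    balance output to the known applied force (sketch of the idea)."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(forces) / n
    sxx = sum((x - mx) ** 2 for x in readings)
    sxy = sum((x - mx) * (y - my) for x, y in zip(readings, forces))
    gain = sxy / sxx
    return gain, my - gain * mx

# Hypothetical sweep: a 2 kg mass at 0.5 m radius over several spin rates.
omegas = [1.0, 2.0, 3.0, 4.0]
forces = [expected_centripetal_force(2.0, w, 0.5) for w in omegas]
readings = [f / 1.05 + 0.3 for f in forces]   # imperfect raw sensor (assumed)
gain, offset = fit_calibration(readings, forces)
```

Varying the acceleration sweeps out different expected forces, which is what lets a single mass calibrate the balance over a range of loads.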
Maximizing profitability in a hospital outpatient pharmacy.
Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A
1989-07-01
This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue, including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales, are described. Programs to maximize existing revenue sources, such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic, and increasing HMO prescription volumes, are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Barhen, Jacob; Glover, Charles Wayne
2012-01-01
Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.
Sok, J; Hogeveen, H; Elbers, A R W; Velthuis, A G J; Oude Lansink, A G J M
2014-08-01
In order to halt the Bluetongue virus serotype 8 (BTV-8) epidemic in 2008, the European Commission promoted vaccination at a transnational level as a new measure to combat BTV-8. Most European member states opted for a mandatory vaccination campaign, whereas the Netherlands, amongst others, opted for a voluntary campaign. For the latter to be effective, farmers' willingness to vaccinate must be high enough to reach satisfactory vaccination coverage to stop the spread of the disease. This study looked at a farmer's expected utility of vaccination, which is expected to have a positive impact on the willingness to vaccinate. Decision analysis was used to structure the vaccination decision problem into decisions, events and payoffs, and to define the relationships among these elements. Two scenarios were formulated to distinguish farmers' mindsets, based on differences in dairy heifer management. For each of the scenarios, a decision tree was run for two years to study vaccination behaviour over time. The analysis was based on the expected utility criterion, which makes it possible to account for the effect of a farmer's risk preference on the vaccination decision. Probabilities were estimated by experts; payoffs were based on an earlier published study. According to the results of the simulation, the farmer initially decided to vaccinate against BTV-8 as the net expected utility of vaccination was positive. Re-vaccination was uncertain due to lower expected costs of a continued outbreak; a risk-averse farmer in this respect is more likely to re-vaccinate. When heifers were retained on the farm for export, the net expected utility of vaccination was generally larger, and re-vaccination was thus more likely. For future animal health programmes that rely on a voluntary approach, the results show that the provision of financial incentives can be adjusted to farmers' willingness to vaccinate over time.
Important in this respect are the decision moment and the characteristics of the disease. Farmers' perceptions of the disease risk and of the efficacy of available control options cannot be neglected. Copyright © 2014 Elsevier B.V. All rights reserved.
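The expected-utility comparison at the heart of this decision tree can be sketched directly. The probabilities, euro payoffs, and the exponential (CARA) utility form below are illustrative assumptions, not the paper's elicited values or its exact specification; the sketch only reproduces the qualitative finding that risk aversion favors vaccination.

```python
import math

def cara_utility(payoff, risk_aversion):
    """Exponential (CARA) utility; risk_aversion = 0 reduces to the
    risk-neutral expected-value criterion.  The functional form is an
    illustrative choice."""
    if risk_aversion == 0.0:
        return payoff
    return (1.0 - math.exp(-risk_aversion * payoff)) / risk_aversion

def expected_utility(branches, risk_aversion=0.0):
    """branches: list of (probability, payoff) leaves of one decision."""
    return sum(p * cara_utility(x, risk_aversion) for p, x in branches)

# Hypothetical payoffs: vaccinating costs 500 up front but caps outbreak
# losses; not vaccinating risks a 10,000 loss with probability 0.10.
vaccinate = [(0.95, -500.0), (0.05, -1500.0)]
no_vacc = [(0.90, 0.0), (0.10, -10000.0)]

net_eu_neutral = expected_utility(vaccinate) - expected_utility(no_vacc)
net_eu_averse = (expected_utility(vaccinate, 1e-4)
                 - expected_utility(no_vacc, 1e-4))
```

Because the concave utility penalizes the rare large loss more heavily, the net expected utility of vaccinating grows with risk aversion, matching the abstract's observation about risk-averse farmers.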
NASA Astrophysics Data System (ADS)
Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao
2018-03-01
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. 
These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
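The basic quantity underlying the paper, the Schmidt rank of a bipartite pure state, is just the rank of its coefficient matrix; the maximal Schmidt rank then takes the maximum over the support of a reduced density operator. A minimal pure-Python sketch of the bipartite building block (real amplitudes only, for illustration; the paper's semi-invariant machinery is far beyond this):

```python
def matrix_rank(rows, tol=1e-10):
    """Rank via Gaussian elimination with partial pivoting (floats)."""
    m = [list(r) for r in rows]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        if rank == len(m):
            break
        pivot = max(range(rank, len(m)), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                for c in range(col, ncols):
                    m[r][c] -= f * m[rank][c]
        rank += 1
    return rank

def schmidt_rank(amplitudes, dim_a, dim_b):
    """Schmidt rank of |psi> = sum_ij c_ij |i>|j>: the rank of the
    dim_a x dim_b coefficient matrix (c_ij), read row-major from
    `amplitudes`."""
    c = [amplitudes[i * dim_b:(i + 1) * dim_b] for i in range(dim_a)]
    return matrix_rank(c)

# Bell state (|00> + |11>)/sqrt(2) has Schmidt rank 2; |00> has rank 1.
bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
prod = [1.0, 0.0, 0.0, 0.0]
```

In practice one would use an SVD (e.g. numpy.linalg.matrix_rank) rather than hand-rolled elimination; the point is only that Schmidt rank reduces to matrix rank.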
Utility franchises reconsidered
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weidner, B.
It is easier to obtain a public utility franchise than one for a fast food store because companies like Burger King value the profit share and control available with a franchise arrangement. The investor-owned utilities (IOUs) in Chicago and elsewhere get little financial or regulatory benefit, although they do have an alternative because the franchise can be taken over by the city with a one-year notice. As IOUs have evolved, the annual franchise fee has been incorporated into the rate in a move that taxes ratepayers and maximizes profits. Cities that have found franchising unsatisfactory are looking for ways to terminate the franchise and finance a takeover, but limited-term and indeterminate franchises may offer a better mechanism when public needs and utility aims diverge. A directory lists franchised utilities by state and comments on their legal status. (DCK)
Action Being Character: A Promising Perspective on the Solution Concept of Game Theory
Deng, Kuiying; Chu, Tianguang
2011-01-01
The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we think this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character, and he is not concerned about which strategies other players will choose and will never feel regret about his decision. It is shown by solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, that the framework of implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behaviors in social decision making situations. PMID:21573055
Moore, Daniel R; Phillips, Stuart M; Babraj, John A; Smith, Kenneth; Rennie, Michael J
2005-06-01
We aimed to determine whether there were differences in the extent and time course of skeletal muscle myofibrillar protein synthesis (MPS) and muscle collagen protein synthesis (CPS) in human skeletal muscle in an 8.5-h period after bouts of maximal muscle shortening (SC; average peak torque = 225 +/- 7 N.m, means +/- SE) or lengthening contractions (LC; average peak torque = 299 +/- 18 N.m) with equivalent work performed in each mode. Eight healthy young men (21.9 +/- 0.6 yr, body mass index 24.9 +/- 1.3 kg/m2) performed 6 sets of 10 maximal unilateral LC of the knee extensors on an isokinetic dynamometer. With the contralateral leg, they then performed 6 sets of maximal unilateral SC with work matched to the total work performed during LC (10.9 +/- 0.7 vs. 10.9 +/- 0.8 kJ, P = 0.83). After exercise, the participants consumed small intermittent meals to provide 0.1 g.kg(-1).h(-1) of protein and carbohydrate. Prior exercise elevated MPS above rest in both conditions, but there was a more rapid rise after LC (P < 0.01). The increases (P < 0.001) in CPS above rest were identical for both SC and LC and likely represent a remodeling of the myofibrillar basement membrane. Therefore, a more rapid rise in MPS after maximal LC could translate into greater protein accretion and muscle hypertrophy during chronic resistance training utilizing maximal LC.
Product-line selection and pricing with remanufacturing under availability constraints
NASA Astrophysics Data System (ADS)
Aras, Necati; Esenduran, Gökçe; Altinel, I. Kuban
2004-12-01
Product line selection and pricing are two crucial decisions for the profitability of a manufacturing firm. Remanufacturing, on the other hand, may be a profitable strategy that captures the remaining value in used products. In this paper we develop a mixed-integer nonlinear programming model from the perspective of an original equipment manufacturer (OEM). The objective of the OEM is to select products to manufacture and remanufacture among a set of given alternatives and simultaneously determine their prices so as to maximize its profit. It is assumed that the probability a customer selects a product is proportional to its utility and inversely proportional to its price. The utility of a product is an increasing function of its perceived quality. In our base model, products are discriminated by their unit production costs and utilities. We also analyze a case where remanufacturing is limited by the available quantity of collected remanufacturable products. We show that the resulting problem decomposes into pricing and product line selection subproblems. The pricing problem is solved by a variant of the simplex search procedure that can also handle constraints, while complete enumeration and a genetic algorithm are used for the solution of the product line selection problem. A number of experiments are carried out to identify conditions under which it is economically viable for the firm to sell remanufactured products. We also determine the optimal utility and unit production cost values of a remanufactured product that maximize the total profit of the OEM.
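The stated choice model (selection probability proportional to utility and inversely proportional to price) fixes the demand side of the profit objective. A sketch with hypothetical utilities, prices, and costs; the paper's MINLP, availability constraints, and solution procedures are not reproduced.

```python
def choice_probabilities(utilities, prices):
    """Selection probability proportional to u_j / p_j, as stated in
    the abstract; attraction values are normalized to sum to 1."""
    attract = [u / p for u, p in zip(utilities, prices)]
    z = sum(attract)
    return [a / z for a in attract]

def expected_profit(utilities, prices, unit_costs, market_size):
    """Expected profit of a candidate product line at given prices
    (a sketch of the objective only)."""
    probs = choice_probabilities(utilities, prices)
    return market_size * sum(q * (p - c)
                             for q, p, c in zip(probs, prices, unit_costs))

# Hypothetical line: a new product and a cheaper remanufactured one.
profit = expected_profit(utilities=[10.0, 7.0], prices=[100.0, 60.0],
                         unit_costs=[55.0, 25.0], market_size=1000)
```

With these illustrative numbers the remanufactured product captures the larger market share (7/60 > 10/100), which is the kind of trade-off the pricing subproblem has to balance against margins.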
Hatam, Nahid; Askarian, Mehrdad; Javan-Noghabi, Javad; Ahmadloo, Niloofar; Mohammadianpanah, Mohammad
2015-01-01
A cost-utility analysis was performed to assess the cost-utility of neoadjuvant chemotherapy regimens containing doxorubicin and cyclophosphamide (AC) versus paclitaxel and gemcitabine (PG) for locally advanced breast cancer patients in Iran. This cross-sectional study at Namazi hospital in Shiraz, in the south of Iran, covered 64 breast cancer patients. The patients were randomly divided into two groups, 32 receiving AC and 32 receiving PG. Costs were identified and measured from a community perspective; these included direct medical, direct non-medical, and indirect costs. In this study, a data collection form was used. To assess the utility of the two regimens, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Core30 (EORTC QLQ-C30) was applied. Using a decision tree, we calculated the expected costs and quality-adjusted life years (QALYs) for both methods; the incremental cost-effectiveness ratio was also assessed. The results of the decision tree showed that in the AC arm the expected cost was 39,170 US$ and the expected QALY was 3.39, while in the PG arm the expected cost was 43,336 US$ and the expected QALY was 2.64. Sensitivity analysis confirmed the cost-effectiveness of AC, with an incremental cost-effectiveness ratio (ICER) of -5535 US$. Overall, the results showed AC to be superior to PG in the treatment of patients with breast cancer, being less costly and more effective.
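The ICER arithmetic behind the comparison is a single quotient. A sketch using the rounded figures reported in the abstract; the published value of -5535 US$ presumably derives from unrounded inputs, so this reproduces it only approximately.

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy B over A:
    (C_B - C_A) / (Q_B - Q_A), here in US$ per QALY gained."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Abstract's figures: AC (arm A) vs PG (arm B).
value = icer(cost_a=39170.0, qaly_a=3.39, cost_b=43336.0, qaly_b=2.64)
```

A negative ICER here means PG is dominated: it costs more and yields fewer QALYs, so AC is preferred without any willingness-to-pay threshold.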
A History of Educational Facilities Laboratories (EFL)
ERIC Educational Resources Information Center
Marks, Judy
2009-01-01
The Educational Facilities Laboratories (EFL), an independent research organization established by the Ford Foundation, opened its doors in 1958 under the direction of Harold B. Gores, a distinguished educator. Its purpose was to help schools and colleges maximize the quality and utility of their facilities, stimulate research, and disseminate…
Fusing corn nitrogen recommendation tools for an improved canopy reflectance sensor performance
USDA-ARS?s Scientific Manuscript database
Nitrogen (N) rate recommendation tools are utilized to help producers maximize corn grain yield production. Many of these tools provide recommendations at field scales but often fail when corn N requirements are variable across the field. Canopy reflectance sensors are capable of capturing within-fi...
ERIC Educational Resources Information Center
Spiegel, U.; Templeman, J.
1996-01-01
Applies the literature of bundling, tie-in sales, and vertical integration to higher education. Students are often required to purchase a package of courses, some of which are unrelated to their major. This kind of bundling policy can be utilized as a profit-maximizing strategy for universities exercising a degree of monopolistic power. (12…