Science.gov

Sample records for expected utility maximization

  1. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    SciTech Connect

    Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  2. Why Contextual Preference Reversals Maximize Expected Value

    PubMed Central

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
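The core mechanism described above (noisy estimates of expected value driving choice) can be sketched as a small Monte Carlo simulation. This is illustrative only, not the authors' full model; option values and noise levels are hypothetical:

```python
import random

def choose_by_noisy_ev(values, noise_sd, seed=None):
    """Index of the option whose noisy expected-value estimate is highest."""
    rng = random.Random(seed)
    noisy = [v + rng.gauss(0.0, noise_sd) for v in values]
    return max(range(len(values)), key=lambda i: noisy[i])

def choice_share(values, noise_sd, target, trials=10_000, seed=0):
    """Fraction of independent noisy choices that pick option `target`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        noisy = [v + rng.gauss(0.0, noise_sd) for v in values]
        if max(range(len(values)), key=lambda i: noisy[i]) == target:
            hits += 1
    return hits / trials
```

With zero noise the chooser is a strict expected-value maximizer; with noise, choice becomes probabilistic, and adding a third option changes the choice shares of the original pair, which is how apparent preference reversals can arise from expected-value maximization.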

  3. Why contextual preference reversals maximize expected value.

    PubMed

    Howes, Andrew; Warren, Paul A; Farmer, George; El-Deredy, Wael; Lewis, Richard L

    2016-07-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types, including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391


  5. Classical subjective expected utility.

    PubMed

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-04-23

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty. PMID:23559375

  6. Using explicit decision rules to manage issues of justice, risk, and ethics in decision analysis: when is it not rational to maximize expected utility?

    PubMed

    Deber, R B; Goel, V

    1990-01-01

    Concepts of justice, risk, and ethics can be merged with decision analysis by requiring the analyst to specify explicitly a decision rule or sequence of rules. Decision rules are categorized by whether they consider: 1) aspects of outcome distributions beyond central tendencies; 2) probabilities as well as utilities of outcomes; and 3) means as well as ends. This formulation suggests that distribution-based decision rules could address both risk (for an individual) and justice (for the population). Rational choice under risk might ignore probability information if choices are one-time only (vs. repeated events) or if one branch contains unlikely but disastrous outcomes. Incorporating risk attitude into decision rules rather than utilities could facilitate use of multiattribute approaches to measuring outcomes. Certain ethical concerns could be addressed by prior specification of rules for allowing particular branches. Examples, including selection of polio vaccine strategies, are discussed, and theoretical and practical implications of a decision rule approach are noted. PMID:2196412
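The contrast between an expected-utility rule and a rule that ignores probability information on disastrous branches can be illustrated with a toy comparison. The options and payoffs below are hypothetical, not taken from the paper:

```python
def expected_utility(outcomes):
    """Probability-weighted utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def maximin(outcomes):
    """Worst-case utility: a decision rule that ignores probabilities entirely."""
    return min(u for _, u in outcomes)

def choose(options, rule):
    """Pick the named option that scores highest under the given decision rule."""
    return max(options, key=lambda pair: rule(pair[1]))[0]

# One-shot choice with an unlikely but disastrous branch.
options = [
    ("safe",  [(1.0, 5)]),
    ("risky", [(0.999, 10), (0.001, -100)]),
]
```

Here expected utility favors "risky" (9.89 vs. 5), while the maximin rule rejects it because of the disastrous branch, matching the paper's point that such rules can be rational for one-time-only choices.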

  7. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  8. Inexact Matching of Ontology Graphs Using Expectation-Maximization

    PubMed Central

    Doshi, Prashant; Kolli, Ravikanth; Thomas, Christopher

    2009-01-01

    We present a new method for mapping ontology schemas that address similar domains. The problem of ontology matching is crucial since we are witnessing a decentralized development and publication of ontological data. We formulate the problem of inferring a match between two ontologies as a maximum likelihood problem, and solve it using the technique of expectation-maximization (EM). Specifically, we adopt directed graphs as our model for ontology schemas and use a generalized version of EM to arrive at a map between the nodes of the graphs. We exploit the structural, lexical and instance similarity between the graphs, and differ from previous approaches in the way we utilize them to arrive at a possibly inexact match. Inexact matching is the process of finding a best possible match between two graphs when exact matching is not possible or is computationally difficult. In order to scale the method to large ontologies, we identify the computational bottlenecks and adapt the generalized EM by using a memory-bounded partitioning scheme. We provide comparative experimental results in support of our method on two well-known ontology alignment benchmarks and discuss their implications. PMID:20160892

  9. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy can be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. Over a course of approximately 8 hours, more than 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination: the sensitivity and specificity of SBI were reported to be about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves a sensitivity of 92% and a specificity of 98%.
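EM clustering of the kind used here can be illustrated in one dimension with a two-component Gaussian mixture. This is a generic sketch of the algorithm, not the paper's pixel-level classifier; the initialization scheme is an assumption:

```python
import math

def em_1d_two_gaussians(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns (means, variances, weights). Means are initialized from the
    data range; a variance floor guards against component collapse.
    """
    lo, hi = min(xs), max(xs)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from responsibilities
        for k in (0, 1):
            nk = max(sum(r[k] for r in resp), 1e-12)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
            w[k] = nk / len(xs)
    return mu, var, w
```

In the paper's setting, the data would be pixel color features and the two clusters would correspond to bleeding and non-bleeding regions.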

  10. Weighted EMPCA: Weighted Expectation Maximization Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Bailey, Stephen

    2016-09-01

    Weighted EMPCA performs principal component analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that the resulting eigenvectors, when compared to classic PCA, are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data are simply limiting cases of weight = 0. The underlying algorithm is a noise weighted expectation maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution.
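The noise-weighted EM PCA iteration can be sketched for a single component. This illustrates the algorithm class (alternating coefficient and eigenvector solves of the weighted least-squares problem), not Bailey's implementation:

```python
import numpy as np

def weighted_empca_first_pc(X, W, iters=200):
    """First principal component via noise-weighted EM PCA (a sketch).

    X : (n_obs, n_var) mean-subtracted data
    W : (n_obs, n_var) non-negative weights; 0 marks missing data
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        # E-step: coefficients c_i minimizing sum_j w_ij (x_ij - c_i v_j)^2
        c = (W * X) @ v / np.maximum(W @ v**2, 1e-12)
        # M-step: re-estimate the eigenvector given the coefficients
        v = (W * X).T @ c / np.maximum(W.T @ c**2, 1e-12)
        v /= np.linalg.norm(v)
    return v
```

Setting a weight to zero simply drops that measurement from both sums, which is the "missing data as weight = 0" limiting case described above.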

  11. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
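The branch-point selection rule reduces to evaluating each eligible option's utility distribution at the current time and taking the maximum. A minimal sketch, with hypothetical option names and utility functions:

```python
def select_option(options, t):
    """Pick the eligible option with the highest expected utility at time t.

    options -- list of (name, (earliest, latest), utility_fn), where
    utility_fn maps a start time to the expected utility of the plan
    suffix beginning with that option. Returns None if nothing is eligible.
    """
    eligible = [(name, u(t)) for name, (lo, hi), u in options if lo <= t <= hi]
    if not eligible:
        return None
    return max(eligible, key=lambda pair: pair[1])[0]

# Illustrative branch point: utilities vary with start time, so the
# preferred option depends on when execution actually arrives.
options = [
    ("drill", (0.0, 10.0), lambda t: 5.0 - 0.1 * t),
    ("image", (5.0, 20.0), lambda t: 3.0 + 0.2 * t),
]
```

Because the distributions are functions of start time, the same cached table yields different selections at different arrival times without recomputation, which is the efficiency point made above.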

  12. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    SciTech Connect

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated from the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method in which both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method under different levels of measurement noise and initial parameter uncertainty.

  13. Expectation-Maximization Binary Clustering for Behavioural Annotation

    PubMed Central

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631

  14. TRLFS: Analysing spectra with an expectation-maximization (EM) algorithm

    NASA Astrophysics Data System (ADS)

    Steinborn, A.; Taut, S.; Brendler, V.; Geipel, G.; Flach, B.

    2008-12-01

    A new approach for fitting statistical models to time-resolved laser-induced fluorescence spectroscopy (TRLFS) spectra is presented. Such spectra result from counting emitted photons in defined intervals. Any photon can be described by emission time and wavelength as observable attributes and by component and peak affiliation as hidden ones. Understanding the attribute values of the emitted photons as drawn from a probability density distribution, the model estimation problem can be described as a statistical problem with incomplete data. To solve the maximum likelihood task, an expectation-maximization (EM) algorithm is derived and tested. In contrast to the well known least squares method, the advantage of the new approach is its ability to decompose the spectrum into its components and peaks using the revealed hidden attributes of the photons as well as the ability to decompose a background-superimposed spectrum into the exploitable signal of the fluorescent chemical species and the background. This facilitates new possibilities for evaluation of the resulting model parameters. The simultaneous detection of temporal and spectral model parameters provides a mutually consistent description of TRLFS spectra.

  15. Robust Utility Maximization Under Convex Portfolio Constraints

    SciTech Connect

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-04-15

    We study a robust utility maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  16. Parallel expectation-maximization algorithms for PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-Min

    1999-10-01

    Image reconstruction using Positron Emission Tomography (PET) involves estimating an unknown number of photon pairs emitted from the radiopharmaceuticals within the tissues of the patient's body. The generation of the photons can be described as a Poisson process, and the difficulty of image reconstruction lies in approximating the parameters of the tissue density distribution function. A significant amount of artifactual noise exists in images reconstructed with the convolution back-projection method. Using the Maximum Likelihood (ML) formulation, a better estimate can be made of the unknown image information. Despite the better image quality, the Expectation Maximization (EM) iterative algorithm is not used in practice due to the tremendous processing time. This research proposes new techniques for designing parallel algorithms in order to speed up the reconstruction process. Using the EM algorithm as an example, several general parallel techniques were studied for a distributed-memory architecture and a message-passing programming paradigm. Both intra- and inter-iteration latency-hiding schemes were designed to effectively reduce communication time. Dependencies within and between iterations were rearranged by overlapping communication and computation with MPI's non-blocking collective reduction operation. A performance model was established to estimate the processing time of the algorithms and was found to agree with the experimental results. A second strategy, a sparse matrix compaction technique, was developed to reduce the computational time of the computation-bound EM algorithm by making better use of the PET system geometry. The proposed techniques are generally applicable to many scientific computing problems that involve sparse matrix operations as well as iterative types of algorithms.
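The EM algorithm being parallelized here is the classic ML-EM multiplicative update. A serial numpy sketch of that update (illustrative; the paper's contribution is the parallelization, not this kernel):

```python
import numpy as np

def mlem(A, y, iters=500):
    """Classic ML-EM reconstruction for Poisson-distributed counts.

    A : (n_detectors, n_pixels) non-negative system matrix
    y : (n_detectors,) measured counts
    Each iteration applies the multiplicative update
    lam <- lam * A^T(y / (A lam)) / (A^T 1).
    """
    lam = np.ones(A.shape[1])                   # flat initial image
    sens = np.maximum(A.sum(axis=0), 1e-12)     # sensitivity A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ lam, 1e-12)       # forward projection
        lam *= (A.T @ (y / proj)) / sens        # backproject ratio, rescale
    return lam
```

The forward and back projections (`A @ lam`, `A.T @ ...`) are the sparse-matrix operations that dominate the cost and that the paper's latency-hiding and compaction strategies target.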

  17. Matching Pupils and Teachers to Maximize Expected Outcomes.

    ERIC Educational Resources Information Center

    Ward, Joe H., Jr.; And Others

    To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…

  18. Maximizing the clinical utility of comparative effectiveness research.

    PubMed

    Umscheid, C A

    2010-12-01

    Providers, consumers, payers, and policy makers are awash in choices when it comes to medical decision making and need better evidence to inform their decisions. Large federal investments in comparative effectiveness research (CER) aim to fill this need. But how do we ensure the clinical utility of CER? Here, I define comparative effectiveness and clinical utility, outline metrics to evaluate clinical utility, and suggest methods for maximizing the clinical utility of CER for the various stakeholders.

  19. An expected utility maximizer walks into a bar…

    PubMed

    Burghart, Daniel R; Glimcher, Paul W; Lazzaro, Stephanie C

    2013-06-01

    We conducted field experiments at a bar to test whether blood alcohol concentration (BAC) correlates with violations of the generalized axiom of revealed preference (GARP) and the independence axiom. We found that individuals with BACs well above the legal limit for driving adhere to GARP and independence at rates similar to those who are sober. This finding led to the fielding of a third experiment to explore how risk preferences might vary as a function of BAC. We found gender-specific effects: Men did not exhibit variations in risk preferences across BACs. In contrast, women were more risk averse than men at low BACs but exhibited increasing tolerance towards risks as BAC increased. Based on our estimates, men and women's risk preferences are predicted to be identical at BACs nearly twice the legal limit for driving. We discuss the implications for policy-makers. PMID:24244072

  20. AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization

    NASA Astrophysics Data System (ADS)

    Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui

    High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (EM) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem
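A toy version of the read-reassignment EM can make the idea concrete: ambiguous reads are reassigned in proportion to locus enrichment, and enrichment is re-estimated from the soft assignments. This is a deliberately simplified sketch (enrichment counts only, no sequence qualities or background model), not the AREM implementation:

```python
def multiread_em(reads, n_loci, iters=50):
    """Toy EM over ambiguous read alignments.

    reads  -- list of candidate-locus lists, one per read
    n_loci -- number of loci (tiny, illustrative)
    Returns per-read alignment probabilities over each read's candidates.
    """
    # Start from uniform alignment probabilities for each read.
    probs = [[1.0 / len(cand)] * len(cand) for cand in reads]
    for _ in range(iters):
        # M-step: locus enrichment = expected number of reads mapped there.
        enrich = [1e-9] * n_loci
        for cand, p in zip(reads, probs):
            for locus, pi in zip(cand, p):
                enrich[locus] += pi
        # E-step: reassign each read proportionally to current enrichment.
        for i, cand in enumerate(reads):
            w = [enrich[l] for l in cand]
            s = sum(w)
            probs[i] = [x / s for x in w]
    return probs
```

With four unique reads split 3:1 between two loci and one ambiguous read, the ambiguous read's probability mass converges toward the better-supported locus, which is how multireads inside repeats get recovered.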

  1. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
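A brute-force baseline for the assignment part of this problem can be sketched by assigning whole messages to links. Since it allows no coding and no message splitting, it is only a lower bound on what the coded linear-programming formulation can achieve; all sizes, values, and probabilities below are hypothetical:

```python
from itertools import product

def best_assignment(messages, links):
    """Exhaustive search over whole-message-to-link assignments.

    messages -- list of (size, value) pairs
    links    -- list of (capacity, p_working) pairs
    Maximizes the expected value of messages decodable by the receiver,
    where a link delivers its load with its working probability.
    """
    best_val, best_asg = 0.0, None
    n = len(links)
    for asg in product(range(n + 1), repeat=len(messages)):  # index n = drop
        load = [0.0] * n
        value = [0.0] * n
        feasible = True
        for (size, val), k in zip(messages, asg):
            if k == n:
                continue  # message not sent
            load[k] += size
            value[k] += val
            if load[k] > links[k][0]:
                feasible = False
                break
        if not feasible:
            continue
        expected = sum(links[k][1] * value[k] for k in range(n))
        if expected > best_val:
            best_val, best_asg = expected, asg
    return best_val, best_asg
```

The innovation's point is precisely that coding and fractional splitting (via the LP formulation) can beat every such uncoded assignment when three or more links are available.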

  2. What utilities should expect from competitive intelligence

    SciTech Connect

    Fuld, L.M.; Borska, D.L.

    1995-03-01

    Electric utilities are informationally dysfunctional. In a survey of electric utility managers, we found that while employees may possess the necessary information for decision-making, they may not understand how the information is used, why it is important, or who needs to know it. Utility managers feel that their organizations suffer from intelligence deficits in the following areas: (1) customer retention: customer (rather than competitor) intelligence is desperately needed; (2) competitor costs: as prices drive markets, utilities must learn how competitors use technology to gain a cost advantage; and (3) market savvy: recognizing threats means more than just crunching the numbers; it means converting raw data into a strategy that will expose a competitor's weakness. The complex economics will require companies to apply all types of intelligence to solve competitive problems. This coherent approach requires changes in the way both management and the organization handle vital intelligence.

  3. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain arising from this energy gap. We construct an objective function considering food consumption, eating habits, and survival rate to measure utility. Applying tools from optimal control and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight. Depending on the values of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with dampened oscillations.
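The state equation at the heart of such models says weight changes in proportion to the gap between energy intake and expenditure. A minimal Euler integration of that dynamic; the energy density and linear expenditure function below are illustrative assumptions, not the paper's calibration:

```python
def weight_trajectory(w0, intake, expenditure_fn, rho=7700.0, days=365):
    """Euler integration of dW/dt = (intake - expenditure(W)) / rho.

    w0             -- initial weight (kg)
    intake         -- daily energy intake (kcal/day)
    expenditure_fn -- daily expenditure as a function of weight (kcal/day)
    rho            -- kcal per kg of tissue (assumed constant)
    """
    w = w0
    path = [w]
    for _ in range(days):
        w += (intake - expenditure_fn(w)) / rho
        path.append(w)
    return path
```

With expenditure proportional to weight (say 30 kcal/kg/day), the trajectory rises monotonically toward the steady state where intake equals expenditure, matching the monotonic-trajectory regime described in the abstract.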

  4. Expectation maximization and total variation-based model for computed tomography reconstruction from undersampled data

    NASA Astrophysics Data System (ADS)

    Yan, Ming; Vese, Luminita A.

    2011-03-01

    Computerized tomography (CT) plays an important role in medical imaging, especially for diagnosis and therapy. However, higher radiation doses from CT increase radiation exposure in the population, so reducing radiation from CT is an essential issue. Expectation maximization (EM) is an iterative method used for CT image reconstruction that maximizes the likelihood function under a Poisson noise assumption. Total variation regularization is a technique frequently used in image restoration to preserve edges, given the assumption that most images are piecewise constant. Here, we propose a method combining expectation maximization and total variation regularization, called EM+TV. This method can reconstruct a better image using fewer views in the computed tomography setting, thus reducing the overall radiation dose. The numerical results in two and three dimensions show the efficiency of the proposed EM+TV method by comparison with results obtained by filtered back projection (FBP) or by EM only.

  5. A compact formulation for maximizing the expected number of transplants in kidney exchange programs

    NASA Astrophysics Data System (ADS)

    Alvelos, Filipe; Klimentova, Xenia; Rais, Abdur; Viana, Ana

    2015-05-01

    Kidney exchange programs (KEPs) allow the exchange of kidneys between incompatible donor-recipient pairs. Optimization approaches can help KEPs in defining which transplants should be made among all incompatible pairs according to some objective. The most common objective is to maximize the number of transplants. In this paper, we propose an integer programming model which addresses the objective of maximizing the expected number of transplants, given that there are equal probabilities of failure associated with vertices and arcs. The model is compact, i.e. has a polynomial number of decision variables and constraints, and therefore can be solved directly by a general purpose integer programming solver (e.g. Cplex).
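One way to read the expected-transplants objective: a cycle of pairs yields its transplants only if every vertex and arc involved succeeds. A sketch under that homogeneous-failure assumption (a hand-computable special case, not the paper's compact integer program):

```python
def expected_transplants(cycle_lengths, q_vertex, q_arc):
    """Expected number of transplants from a set of disjoint cycles.

    A cycle of k pairs yields k transplants only if all k vertices and
    all k arcs succeed; failures are independent, with equal probabilities
    q_vertex and q_arc as in the homogeneous model described above.
    """
    total = 0.0
    for k in cycle_lengths:
        p_all_ok = (1.0 - q_vertex) ** k * (1.0 - q_arc) ** k
        total += k * p_all_ok
    return total
```

The formula makes the optimization trade-off visible: with a 10% vertex failure rate, two 2-cycles (expected 3.24 transplants) beat a single 4-cycle (expected 2.62), so maximizing the raw number of planned transplants is not the same as maximizing the expected number.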

  6. Single-Trial Extraction of Pure Somatosensory Evoked Potential Based on Expectation Maximization Approach.

    PubMed

    Chen, Wei; Chang, Chunqi; Hu, Yong

    2016-01-01

    Accurate extraction of somatosensory evoked potentials (SEPs) and fast tracking of their changes are of great importance for intraoperative monitoring. Currently, multi-trial averaging is widely adopted for SEP signal extraction. However, because variations of SEP features across trials are lost, SEPs estimated in this way are not suitable for real-time monitoring of every single trial. To handle this issue, a number of single-trial SEP extraction approaches have been developed in the literature, such as ARX and SOBI, but most have limited performance because they do not sufficiently exploit the multi-trial and multi-condition structure of the signals. In this paper, a novel Bayesian model of SEP signals is proposed that makes systematic use of multi-trial and multi-condition priors and other structural information in the signal by integrating both a cortical source propagation model and a SEP basis components model, and an Expectation Maximization (EM) algorithm is developed for single-trial SEP estimation under this model. Numerical simulations demonstrate that the developed method provides reasonably good single-trial estimates of SEP as long as the signal-to-noise ratio (SNR) of the measurements is no worse than -25 dB. The effectiveness of the proposed method is further verified by its application to real SEP measurements from a number of different subjects during spinal surgeries. It is observed that using the proposed approach the main SEP features (i.e., latencies) can be reliably estimated on a single-trial basis, and thus the variation of latencies across trials can be traced, which provides solid support for surgical intraoperative monitoring. PMID:26742104

  7. The predictive validity of prospect theory versus expected utility in health utility measurement.

    PubMed

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health. PMID:19833400

  9. Power Dependence in Individual Bargaining: The Expected Utility of Influence.

    ERIC Educational Resources Information Center

    Lawler, Edward J.; Bacharach, Samuel B.

    1979-01-01

    This study uses power-dependence theory as a framework for examining whether and how parties use information on each other's dependence to estimate the utility of an influence attempt. The effect of dependence on expected utilities is investigated (by role playing) in bargaining between employer and employee for a pay raise. (MF)

  10. Disconfirmation of Expectations of Utility in e-Learning

    ERIC Educational Resources Information Center

    Cacao, Rosario

    2013-01-01

    Using pre-training and post-training paired surveys in e-learning based training courses, we have compared the "expectations of utility," measured at the beginning of an e-learning course, with the "perceptions of utility," measured at the end of the course, and related them to the trainees' motivation. We have concluded…

  11. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    NASA Astrophysics Data System (ADS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half the detector size and sets the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms, and that the WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can easily be extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
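
    The core MLEM update on which WL-MLEM builds can be sketched in a few lines. This is a generic, illustrative implementation for a tiny system matrix (wobbled sampling and LSF deconvolution, the paper's actual contributions, are not modeled here):

```python
def mlem_step(A, y, x):
    """One MLEM update for emission tomography:
    x_j <- x_j * backproject(y / A x)_j / sensitivity_j,
    where A is the system matrix (list of rows), y the measured counts,
    and x the current nonnegative image estimate."""
    m, n = len(A), len(x)
    # Forward-project the current estimate.
    fp = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    updated = []
    for j in range(n):
        sens = sum(A[i][j] for i in range(m))                 # sensitivity (column sum)
        back = sum(A[i][j] * y[i] / fp[i] for i in range(m))  # backprojected ratio
        updated.append(x[j] * back / sens)
    return updated
```

    Iterating this update increases the Poisson likelihood monotonically while keeping the image nonnegative, which is why early-stopped MLEM also acts as a mild regularizer.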

  12. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  13. Subjective Expected Utility: A Model of Decision-Making.

    ERIC Educational Resources Information Center

    Fischoff, Baruch; And Others

    1981-01-01

    Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)

  14. Implementation and evaluation of an expectation maximization reconstruction algorithm for gamma emission breast tomosynthesis

    PubMed Central

    Gong, Zongyi; Klanian, Kelly; Patel, Tushita; Sullivan, Olivia; Williams, Mark B.

    2012-01-01

    Purpose: We are developing a dual modality tomosynthesis breast scanner in which x-ray transmission tomosynthesis and gamma emission tomosynthesis are performed sequentially with the breast in a common configuration. In both modalities projection data are obtained over an angular range of less than 180° from one side of the mildly compressed breast resulting in incomplete and asymmetrical sampling. The objective of this work is to implement and evaluate a maximum likelihood expectation maximization (MLEM) reconstruction algorithm for gamma emission breast tomosynthesis (GEBT). Methods: A combination of Monte Carlo simulations and phantom experiments was used to test the MLEM algorithm for GEBT. The algorithm utilizes prior information obtained from the x-ray breast tomosynthesis scan to partially compensate for the incomplete angular sampling and to perform attenuation correction (AC) and resolution recovery (RR). System spatial resolution, image artifacts, lesion contrast, and signal to noise ratio (SNR) were measured as image quality figures of merit. To test the robustness of the reconstruction algorithm and to assess the relative impacts of correction techniques with changing angular range, simulations and experiments were both performed using acquisition angular ranges of 45°, 90° and 135°. For comparison, a single projection containing the same total number of counts as the full GEBT scan was also obtained to simulate planar breast scintigraphy. Results: The in-plane spatial resolution of the reconstructed GEBT images is independent of source position within the reconstructed volume and independent of acquisition angular range. For 45° acquisitions, spatial resolution in the depth dimension (the direction of breast compression) is degraded with increasing source depth (increasing distance from the collimator surface). Increasing the acquisition angular range from 45° to 135° both greatly reduces this depth dependence and improves the average depth

  15. Is expected utility theory normative for medical decision making?

    PubMed

    Cohen, B J

    1996-01-01

    Expected utility theory is felt by its proponents to be a normative theory of decision making under uncertainty. The theory starts with some simple axioms that are held to be rules that any rational person would follow. It can be shown that if one adheres to these axioms, a numerical quantity, generally referred to as utility, can be assigned to each possible outcome, with the preferred course of action being that which has the highest expected utility. One of these axioms, the independence principle, is controversial, and is frequently violated in experimental situations. Proponents of the theory hold that these violations are irrational. The independence principle is simply an axiom dictating consistency among preferences, in that it dictates that a rational agent should hold a specified preference given another stated preference. When applied to preferences between lotteries, the independence principle can be demonstrated to be a rule that is followed only when preferences are formed in a particular way. The logic of expected utility theory is that this demonstration proves that preferences should be formed in this way. An alternative interpretation is that this demonstrates that the independence principle is not a valid general rule of consistency, but in particular, is a rule that must be followed if one is to consistently apply the decision rule "choose the lottery that has the highest expected utility." This decision rule must be justified on its own terms as a valid rule of rationality by demonstration that violation would lead to decisions that conflict with the decision maker's goals. This rule does not appear to be suitable for medical decisions because often these are one-time decisions in which expectation, a long-run property of a random variable, would not seem to be applicable. This is particularly true for those decisions involving a non-trivial risk of death.
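
    The decision rule discussed above, "choose the lottery that has the highest expected utility," is simple to state computationally. A minimal sketch (the square-root utility below is an arbitrary risk-averse example, not one taken from the paper):

```python
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: utility function."""
    return sum(p * u(x) for p, x in lottery)

def choose(lotteries, u):
    """The expected utility decision rule."""
    return max(lotteries, key=lambda lottery: expected_utility(lottery, u))

# A risk-averse agent (u = sqrt) prefers a sure 100 over a 50/50 gamble
# on 256, even though the gamble has the higher expected value (128).
sure = [(1.0, 100)]
gamble = [(0.5, 256), (0.5, 0)]
preferred = choose([sure, gamble], lambda x: x ** 0.5)
```

    The abstract's point is that applying this rule presupposes that expectation, a long-run property, is the right summary of a one-time risky decision.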

  16. Expectation maximization and the retrieval of the atmospheric extinction coefficients by inversion of Raman lidar data.

    PubMed

    Garbarino, Sara; Sorrentino, Alberto; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele

    2016-09-19

    We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation. PMID:27661889

  17. Clustering performance comparison using K-means and expectation maximization algorithms

    PubMed Central

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-01-01

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
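
    The EM algorithm compared with K-means above alternates between computing cluster responsibilities (E-step) and re-estimating parameters (M-step). A stripped-down sketch for a two-component one-dimensional Gaussian mixture, with fixed unit variances and equal mixing weights for brevity (a full EM would update those as well):

```python
import math

def em_two_means(data, mu1, mu2, n_iter=50):
    """EM for a 1-D mixture of two unit-variance Gaussians with equal
    weights; only the component means are estimated."""
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point.
        resp = []
        for x in data:
            a = math.exp(-0.5 * (x - mu1) ** 2)
            b = math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(a / (a + b))
        # M-step: responsibility-weighted means.
        mu1 = sum(r * x for r, x in zip(resp, data)) / sum(resp)
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / sum(1 - r for r in resp)
    return mu1, mu2
```

    Unlike K-means, which assigns each point to exactly one cluster, EM weights every point by its responsibility under each component.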

  18. Expectation maximization and the retrieval of the atmospheric extinction coefficients by inversion of Raman lidar data

    NASA Astrophysics Data System (ADS)

    Garbarino, Sara; Sorrentino, Alberto; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele

    2016-09-01

    We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.

  19. Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures

    SciTech Connect

    Melis, Anastasios

    2014-12-31

    The project addressed the following technical barrier from the Biological Hydrogen Production section of the Fuel Cell Technologies Program Multi-Year Research, Development and Demonstration Plan: Low Sunlight Utilization Efficiency in Photobiological Hydrogen Production is due to a Large Photosystem Chlorophyll Antenna Size in Photosynthetic Microorganisms (Barrier AN: Light Utilization Efficiency).

  20. Ecological expected utility and the mythical neural code.

    PubMed

    Feldman, Jerome

    2010-03-01

    Neural spikes are an evolutionarily ancient innovation that remains nature's unique mechanism for rapid, long-distance information transfer. It is now known that neural spikes subserve a wide variety of functions and essentially all of the basic questions about the communication role of spikes have been answered. Current efforts focus on the neural communication of probabilities and utility values involved in decision making. Significant progress is being made, but many framing issues remain. One basic problem is that the metaphor of a neural code suggests a communication network rather than a recurrent computational system like the real brain. We propose studying the various manifestations of neural spike signaling as adaptations that optimize a utility function called ecological expected utility.

  1. Breast reduction utilizing the maximally vascularized central breast pedicle.

    PubMed

    Hester, T R; Bostwick, J; Miller, L; Cunningham, S J

    1985-12-01

    Experience using a maximally vascularized central breast pedicle to nourish the nipple-areola is presented. The pedicle is designed to incorporate vascular contributions from the lateral thoracic artery, intercostal perforators, internal mammary perforators, and the thoracoacromial artery by means of the pectoralis major muscle. The basic technique is as follows: First, the areola is incised and 2-cm-thick skin and subcutaneous flaps are dissected medially, laterally, and superiorly, freeing the entire central breast mound. Second, the breast is reduced in a "Christmas tree" manner, taking care not to narrow the base of the pedicle. Third, excess skin and subcutaneous tissue are excised inferomedially and laterally, and the nipple is inset into its proper location. The advantages of this technique are (1) large and small reductions can be done, (2) pedicle length does not appear to be a problem, and (3) the central mound gives the forward projection needed for good contour and good aesthetic results. Sixty-five patients with follow-up to 4 years are presented.

  2. Image segmentation with implicit color standardization using spatially constrained expectation maximization: detection of nuclei.

    PubMed

    Monaco, James; Hipp, J; Lucas, D; Smith, S; Balis, U; Madabhushi, Anant

    2012-01-01

    Color nonstandardness--the propensity for similar objects to exhibit different color properties across images--poses a significant problem in the computerized analysis of histopathology. Though many papers propose means for improving color constancy, the vast majority assume image formation via reflective light instead of light transmission as in microscopy, and thus are inappropriate for histological analysis. Previously, we presented a novel Bayesian color segmentation algorithm for histological images that is highly robust to color nonstandardness; this algorithm employed the expectation maximization (EM) algorithm to dynamically estimate for each individual image the probability density functions that describe the colors of salient objects. However, our approach, like most EM-based algorithms, ignored important spatial constraints, such as those modeled by Markov random field (MRFs). Addressing this deficiency, we now present spatially-constrained EM (SCEM), a novel approach for incorporating Markov priors into the EM framework. With respect to our segmentation system, we replace EM with SCEM and then assess its improved ability to segment nuclei in H&E stained histopathology. Segmentation performance is evaluated over seven (nearly) identical sections of gastrointestinal tissue stained using different protocols (simulating severe color nonstandardness). Over this dataset, our system identifies nuclear regions with an area under the receiver operator characteristic curve (AUC) of 0.838. If we disregard spatial constraints, the AUC drops to 0.748.

  3. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

    In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an adhoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a brand-new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.

  4. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Srivastava, Ashok N.

    2009-01-01

    This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a corresponding centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
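
    The provable-correctness claim is easiest to see for estimates that depend on the data only through additive sufficient statistics, which can be aggregated across peers exactly. A toy illustration (the paper's algorithm achieves this with local asynchronous messaging rather than a central sum):

```python
def centralized_mean(data):
    """Mean computed with all data in one place."""
    return sum(data) / len(data)

def peer_to_peer_mean(partitions):
    """Each peer contributes only its local (sum, count) statistics;
    aggregating them reproduces the centralized estimate exactly."""
    total = sum(sum(part) for part in partitions)
    count = sum(len(part) for part in partitions)
    return total / count
```

    The same idea underlies distributed E-steps: local expected sufficient statistics are combined, so the global M-step matches its centralized counterpart.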

  5. Statistical models of synaptic transmission evaluated using the expectation-maximization algorithm.

    PubMed Central

    Stricker, C; Redman, S

    1994-01-01

    Amplitude fluctuations of evoked synaptic responses can be used to extract information on the probabilities of release at the active sites, and on the amplitudes of the synaptic responses generated by transmission at each active site. The parameters that describe this process must be obtained from an incomplete data set represented by the probability density of the evoked synaptic response. In this paper, the equations required to calculate these parameters using the Expectation-Maximization algorithm and the maximum likelihood criterion have been derived for a variety of statistical models of synaptic transmission. These models are ones where the probabilities associated with the different discrete amplitudes in the evoked responses are a) unconstrained, b) binomial, and c) compound binomial. The discrete amplitudes may be separated by equal (quantal) or unequal amounts, with or without quantal variance. Alternative models have been considered where the variance associated with the discrete amplitudes is sufficiently large such that no quantal amplitudes can be detected. These models involve the sum of a normal distribution (to represent failures) and a unimodal distribution (to represent the evoked responses). The implementation of the algorithm is described in each case, and its accuracy and convergence have been demonstrated. PMID:7948679

  6. An online expectation maximization algorithm for exploring general structure in massive networks

    NASA Astrophysics Data System (ADS)

    Chai, Bianfang; Jia, Caiyan; Yu, Jian

    2015-11-01

    Mixture models and the stochastic block model (SBM) for structure discovery employ a broad and flexible definition of vertex classes such that they are able to explore a wide variety of structure. Compared to the existing algorithms based on the SBM (their time complexities are O(mc^2), where m and c are the numbers of edges and clusters), the algorithms of the mixture model are capable of dealing with networks with a large number of communities more efficiently due to their O(mc) time complexity. However, the algorithms of the mixture model using the expectation maximization (EM) technique are still too slow to deal with real million-node networks, since they compute hidden variables on the entire network in each iteration. In this paper, an online variational EM algorithm is designed to improve the efficiency of the EM algorithms. In each iteration, our online algorithm samples a node and estimates its cluster memberships only by its adjacency links, and model parameters are then estimated from the memberships of the sampled node and the old model parameters obtained in the previous iteration. The provided online algorithm updates model parameters subsequently by the links of a new sampled node and explores the general structure of massive and growing networks with millions of nodes and hundreds of clusters in hours. Compared to the relevant algorithms on synthetic and real networks, the proposed online algorithm costs less with little or no degradation of accuracy. Results illustrate that the presented algorithm offers a good trade-off between precision and efficiency.
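
    The online idea above, updating parameters from one sampled node at a time rather than recomputing hidden variables over the entire network, can be illustrated with the simplest stochastic-approximation recursion. With step size 1/t it reproduces the exact batch mean; the paper applies the same principle to variational EM updates of memberships and model parameters:

```python
def online_mean(stream):
    """Stochastic-approximation update: theta <- theta + (1/t)(x_t - theta).
    With step size 1/t this equals the exact batch mean of the stream."""
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        theta += (x - theta) / t
    return theta
```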

  7. An Expectation Maximization based Method for Subcellular Particle Tracking using Multi-angle TIRF Microscopy*

    PubMed Central

    Liang, Liang; Shen, Hongying; De Camilli, Pietro; Toomre, Derek K.; Duncan, James S.

    2013-01-01

    Multi-angle total internal reflection fluorescence microscopy (MA-TIRFM) is a new generation of TIRF microscopy to study cellular processes near dorsal cell membrane in 4 dimensions (3D+t). To perform quantitative analysis using MA-TIRFM, it is necessary to track subcellular particles in these processes. In this paper, we propose a method based on a MAP framework for automatic particle tracking and apply it to track clathrin coated pits (CCPs). The expectation maximization (EM) algorithm is employed to solve the MAP problem. To provide the initial estimations for the EM algorithm, we develop a forward filter based on the most probable trajectory (MPT) filter. Multiple linear models are used to model particle dynamics. For CCP tracking, we use two linear models to describe constrained Brownian motion and fluorophore variation according to CCP properties. The tracking method is evaluated on synthetic data and results show that it has high accuracy. The result on real data confirmed by human expert cell biologists is also presented. PMID:22003671

  8. Bandwidth utilization maximization of scientific RF communication systems

    SciTech Connect

    Rey, D.; Ryan, W.; Ross, M.

    1997-01-01

    A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques, with little or no increase in the transmit power, which is usually very limited on airborne systems. Teaming with New Mexico State University, we developed and simulated an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, for this application. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.

  9. 76 FR 51060 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... FR 8452-8460), pursuant to section 515 of the Treasury and General Government Appropriations Act for... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... June 2011 (76 FR 37376) intended to ensure and maximize the quality, objectivity, utility,...

  10. Computational rationality: linking mechanism and behavior through bounded utility maximization.

    PubMed

    Lewis, Richard L; Howes, Andrew; Singh, Satinder

    2014-04-01

    We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing "levels" approaches to explanation, and to other optimality-based modeling approaches.

  11. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture.

    PubMed

    Kreitler, Jason; Stoms, David M; Davis, Frank W

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
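
    The greedy heuristic compared against integer programming above can be sketched directly: rank candidate parcels by utility per unit cost and select while the budget allows. The names and numbers here are hypothetical:

```python
def greedy_plan(parcels, budget):
    """parcels: list of (name, utility, cost) tuples.
    Greedy benefit-to-cost selection under a budget constraint."""
    chosen, spent, total_utility = [], 0.0, 0.0
    for name, utility, cost in sorted(parcels, key=lambda p: p[1] / p[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            total_utility += utility
    return chosen, total_utility
```

    Because greedy selection can lock in locally attractive parcels, an exact integer program can do better; the study above reports gains from optimization of up to 12%.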

  12. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868

  13. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  14. Expecting the unexpected: applying the Develop-Distort Dilemma to maximize positive market impacts in health.

    PubMed

    Peters, David H; Paina, Ligia; Bennett, Sara

    2012-10-01

    Although health interventions start with good intentions to develop services for disadvantaged populations, they often distort the health market, making the delivery or financing of services difficult once the intervention is over: a condition called the 'Develop-Distort Dilemma' (DDD). In this paper, we describe how to examine whether a proposed intervention may develop or distort the health market. Our goal is to produce a tool that facilitates meaningful and systematic dialogue for practitioners and researchers to ensure that well-intentioned health interventions lead to productive health systems while reducing the undesirable distortions of such efforts. We apply the DDD tool to plan for development rather than distortions in health markets, using intervention research being conducted under the Future Health Systems consortium in Bangladesh, China and Uganda. Through a review of research proposals and interviews with principal investigators, we use the DDD tool to systematically understand how a project fits within the broader health market system, and to identify gaps in planning for sustainability. We found that while current stakeholders and funding sources for activities were easily identified, future ones were not. The implication is that the projects could raise community expectations that future services will be available and paid for, despite this actually being uncertain. Each project addressed the 'rules' of the health market system differently. The China research assesses changes in the formal financing rules, whereas Bangladesh and Uganda's projects involve influencing community level providers, where informal rules are more important. In each case, we recognize the importance of building trust between providers, communities and government officials. Each project could both develop and distort local health markets. 
Anyone intervening in the health market must recognize the main market perturbations, whether positive or negative, and manage them so

  15. Children's utilization of emotion expectancies in moral decision-making.

    PubMed

    Hertz, Steven G; Krettenauer, Tobias

    2014-09-01

    This study investigated the relevance of emotion expectancies for children's moral decision-making. The sample included 131 participants from three different grade levels (M = 8.39 years, SD = 2.45, range 4.58-12.42). Participants were presented with a set of scenarios that described various emotional outcomes of (im)moral actions and were asked to decide what they would do if they were in the protagonists' shoes. Overall, it was found that the anticipation of moral emotions predicted an increased likelihood of moral choices in antisocial and prosocial contexts. In younger children, anticipated moral emotions predicted moral choice for prosocial actions, but not for antisocial actions. Older children showed evidence for the utilization of anticipated emotions in both prosocial and antisocial behaviours. Moreover, for older children, the decision to act prosocially was less likely in the presence of non-moral emotions. Findings suggest that the impact of emotion expectancies on children's moral decision-making increases with age. Contrary to happy victimizer research, the study does not support the notion that young children use moral emotion expectancies for moral decision-making in the context of antisocial actions.

  16. Expected utility theory and risky choices with health outcomes.

    PubMed

    Hellinger, F J

    1989-03-01

    Studies of people's attitude towards risk in the health sector often involve a comparison of the desirability of alternative medical treatments. Since the outcome of a medical treatment cannot be known with certainty, patients and physicians must make a choice that involves risk. Each medical treatment may be characterized as a gamble (or risky option) with a set of outcomes and associated probabilities. Expected utility theory (EUT) is the standard method to predict people's choices under uncertainty. The author presents the results of a survey that suggests people are very risk averse towards gambles involving health-related outcomes. The survey also indicates that there is significant variability in the risk attitudes across individuals for any given gamble and that there is significant variability in the risk attitudes of a given individual across gambles. The variability of risk attitudes of a given individual suggests that risk attitudes are not absolute but are functions of the parameters in the gamble. PMID:2927183
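
    The risk-averse pattern described here is what a concave utility function predicts; a minimal sketch of the expected-utility calculation (the outcome scale and the square-root utility are invented for illustration):

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility of a gamble under utility function u."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

# A risky "treatment" with two health outcomes (invented numbers), evaluated
# under a concave (risk-averse) utility function.
outcomes, probs = [20.0, 4.0], [0.5, 0.5]
ev = sum(p * x for x, p in zip(outcomes, probs))   # expected value = 12.0
eu = expected_utility(outcomes, probs, math.sqrt)
ce = eu ** 2                                       # certainty equivalent: u(ce) = eu
print(f"EV={ev}, EU={eu:.3f}, CE={ce:.2f}")
```

    The certainty equivalent falls below the expected value, which is the signature of risk aversion; changing the curvature of u changes the gap, mirroring the abstract's point that risk attitudes depend on the gamble's parameters.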

  17. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work, we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with full width at half maximum (FWHM) values accurate to better than 0.12 mm relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental, and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
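
    The generic ML-EM update underlying this kind of reconstruction can be sketched for any linear Poisson model y ≈ Ax. The 1-D deblurring below is a stand-in for the paper's ray-tracing geometry; the system matrix and source values are invented:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """ML-EM for y ~ Poisson(A @ x): x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 1-D source blurred by a known kernel (stand-in for the ray-traced system).
x_true = np.array([0.0, 0.0, 5.0, 9.0, 5.0, 0.0, 0.0])
A = np.zeros((7, 7))
for i in range(7):
    for j in range(7):
        if abs(i - j) <= 1:
            A[i, j] = 0.5 if i == j else 0.25
y = A @ x_true                                    # noiseless measurement
x_hat = mlem(A, y)
print(np.round(x_hat, 2))
```

    The multiplicative update keeps the estimate nonnegative and, with noiseless data, converges to the true source; with real measurements the iteration count trades resolution recovery against noise amplification.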

  18. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    NASA Astrophysics Data System (ADS)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. 
Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  19. Deriving the Expected Utility of a Predictive Model When the Utilities Are Uncertain

    PubMed Central

    Cooper, Gregory F.; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how a model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease. PMID:16779022
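
    A sketch of how a model's expected utility can be averaged over uncertain utilities, via Monte Carlo sampling of one utility term. All operating characteristics, the prevalence, and the utility range are invented; this is not the authors' coronary artery disease model:

```python
import random

def expected_utility(sens, spec, prev, utilities):
    """Expected utility of acting on a binary model's prediction,
    given per-outcome utilities (u_tp, u_fp, u_tn, u_fn)."""
    u_tp, u_fp, u_tn, u_fn = utilities
    return (prev * sens * u_tp + (1 - prev) * (1 - spec) * u_fp
            + (1 - prev) * spec * u_tn + prev * (1 - sens) * u_fn)

random.seed(0)
# Utility uncertainty: sample the false-positive utility from a range
# instead of fixing it, then average the resulting expected utilities.
samples = [expected_utility(0.9, 0.8, 0.3,
                            (1.0, random.uniform(-0.5, -0.1), 0.0, -1.0))
           for _ in range(10_000)]
mean_eu = sum(samples) / len(samples)
print(round(mean_eu, 3))
```

    Because the expected utility is linear in each utility term, the average over samples approaches the expected utility evaluated at the mean utilities, but the sample spread also shows how sensitive the evaluation is to the unknown utility.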

  20. Very slow search and reach: failure to maximize expected gain in an eye-hand coordination task.

    PubMed

    Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T

    2012-01-01

    We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt.
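
    The core expected-gain trade-off can be written down directly; this is a minimal sketch in which the probabilities, reward, and time cost are invented rather than the experiment's values:

```python
# Expected gain of a search-then-reach strategy: probability of touching the
# correct target times the reward, minus a cost proportional to elapsed time.
def expected_gain(p_correct, reward, cost_per_second, expected_time):
    return p_correct * reward - cost_per_second * expected_time

# Two hypothetical strategies (all numbers invented for illustration):
wait_for_eye = expected_gain(0.99, 100, 20, 1.5)  # finish the search, then reach
launch_early = expected_gain(0.95, 100, 20, 1.1)  # start reaching during search
print(wait_for_eye, launch_early)
```

    With these numbers the earlier, riskier launch wins despite its lower hit probability, which is the kind of coordination trade-off the optimal strategy in the study exploits.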

  1. 76 FR 49473 - Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ... AGENCY Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine... decisions on data received in response to the test orders issued under the Endocrine Disruptor Screening...'' system, which means EPA will not know your identity or contact information unless you provide it in...

  2. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
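
    The standard Patlak step that such 4D methods embed is an ordinary linear fit of the transformed time-activity data. A minimal post-reconstruction sketch, with an invented plasma input function and made-up Ki and V values:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

def patlak_fit(t, cp, ct, start=30):
    """Standard Patlak: Ct/Cp = Ki * (int Cp)/Cp + V, linear at late times."""
    x = cumtrapz(cp, t) / cp
    y = ct / cp
    ki, v = np.polyfit(x[start:], y[start:], 1)   # fit late frames only
    return ki, v

t = np.linspace(0.1, 60.0, 60)                # minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0            # invented plasma input function
ki_true, v_true = 0.05, 0.30
ct = ki_true * cumtrapz(cp, t) + v_true * cp  # tissue curve, exactly Patlak
ki, v = patlak_fit(t, cp, ct)
print(round(ki, 4), round(v, 4))
```

    Because the toy tissue curve is generated exactly from the Patlak model, the fit recovers Ki and V; with short noisy frames the same fit becomes unstable, which is the motivation for folding the estimation into the 4D reconstruction.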

  3. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    PubMed

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were

  4. An Expectation-Maximization Method for Spatio-Temporal Blind Source Separation Using an AR-MOG Source Model

    PubMed Central

    Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.

    2009-01-01

    In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368
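
    The mixture-of-Gaussians piece of such a model is fit with a classic EM loop; a minimal 1-D, two-component sketch (not the full spatio-temporal AR-MOG algorithm, and with invented data):

```python
import math, random

def em_gmm_1d(data, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (the MOG building block)."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted mean, variance, and mixing weight.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
            pi[k] = nk / len(data)
    return mu, var, pi

random.seed(1)
data = ([random.gauss(-3.0, 1.0) for _ in range(300)]
        + [random.gauss(3.0, 1.0) for _ in range(300)])
mu, var, pi = em_gmm_1d(data)
print(sorted(round(m, 2) for m in mu))
```

    In AR-MOG this same E/M alternation runs over the innovations of each AR source rather than raw samples, with analogous closed-form updates.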

  5. OPTUM : Optimum Portfolio Tool for Utility Maximization documentation and user's guide.

    SciTech Connect

    VanKuiken, J. C.; Jusko, M. J.; Samsa, M. E.; Decision and Information Sciences

    2008-09-30

    The Optimum Portfolio Tool for Utility Maximization (OPTUM) is a versatile and powerful tool for selecting, optimizing, and analyzing portfolios. The software introduces a compact interface that facilitates problem definition, complex constraint specification, and portfolio analysis. The tool allows simple comparisons between user-preferred choices and optimized selections. OPTUM uses a portable, efficient, mixed-integer optimization engine (lp-solve) to derive the optimal mix of projects that satisfies the constraints and maximizes the total portfolio utility. OPTUM provides advanced features, such as convenient menus for specifying conditional constraints and specialized graphical displays of the optimal frontier and alternative solutions to assist in sensitivity visualization. OPTUM can be readily applied to other nonportfolio, resource-constrained optimization problems.

  6. Measurement and utilization of healthy life expectancy: conceptual issues.

    PubMed

    Robine, J M; Michel, J P; Branch, L G

    1992-01-01

    The periodic calculation of healthy life expectancies permits the evaluation of the impact of new health policies at a given moment, as well as the assessment of trends under changing health conditions. In spite of their apparent simplicity, the results obtained will have to be interpreted by experts. Useful reference values can be provided by international comparisons. However, several choices remain to be made, such as (i) the types of morbidity and disability data to be associated with mortality data; (ii) the multiple indicators available; (iii) the type of observations to be recorded, i.e., "abilities" or "performances"; (iv) whether or not the recovery of lost functions should be considered; (v) the mode of computation, i.e., life expectancy before the first morbid event or global healthy life expectancy; and (vi) the determination of thresholds based on either relative or absolute criteria.
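
    One common mode of computation (the Sullivan method, named here for illustration and not prescribed by this paper) weights life-table person-years by the fraction lived free of disability; all numbers below are invented:

```python
# Sullivan-method sketch: weight life-table person-years lived in each age
# band (Lx) by the fraction lived free of disability.
Lx = [500_000, 480_000, 430_000, 300_000]   # person-years per age band
healthy_frac = [0.98, 0.95, 0.85, 0.60]     # disability-free proportion
l0 = 100_000                                # life-table radix (births)

le = sum(Lx) / l0                                        # life expectancy
hle = sum(L * f for L, f in zip(Lx, healthy_frac)) / l0  # healthy life exp.
print(round(le, 2), round(hle, 2))
```

    The gap between the two numbers is the expected years lived with disability; the conceptual choices the abstract lists (which disability data, which thresholds) all enter through the healthy-fraction column.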

  7. Maternal Immunization Earlier in Pregnancy Maximizes Antibody Transfer and Expected Infant Seropositivity Against Pertussis

    PubMed Central

    Eberhardt, Christiane S.; Blanchard-Rohner, Geraldine; Lemaître, Barbara; Boukrid, Meriem; Combescure, Christophe; Othenin-Girard, Véronique; Chilin, Antonina; Petre, Jean; de Tejada, Begoña Martinez; Siegrist, Claire-Anne

    2016-01-01

    Background. Maternal immunization against pertussis is currently recommended after the 26th gestational week (GW). Data on the optimal timing of maternal immunization are inconsistent. Methods. We conducted a prospective observational noninferiority study comparing the influence of second-trimester (GW 13–25) vs third-trimester (≥GW 26) tetanus-diphtheria-acellular pertussis (Tdap) immunization in pregnant women who delivered at term. Geometric mean concentrations (GMCs) of cord blood antibodies to recombinant pertussis toxin (PT) and filamentous hemagglutinin (FHA) were assessed by enzyme-linked immunosorbent assay. The primary endpoints were GMCs and expected infant seropositivity rates, defined by birth anti-PT >30 enzyme-linked immunosorbent assay units (EU)/mL to confer seropositivity until 3 months of age. Results. We included 335 women (mean age, 31.0 ± 5.1 years; mean gestational age, 39.3 ± 1.3 GW) previously immunized with Tdap in the second (n = 122) or third (n = 213) trimester. Anti-PT and anti-FHA GMCs were higher following second- vs third-trimester immunization (PT: 57.1 EU/mL [95% confidence interval {CI}, 47.8–68.2] vs 31.1 EU/mL [95% CI, 25.7–37.7], P < .001; FHA: 284.4 EU/mL [95% CI, 241.3–335.2] vs 140.2 EU/mL [95% CI, 115.3–170.3], P < .001). The adjusted GMC ratios after second- vs third-trimester immunization differed significantly (PT: 1.9 [95% CI, 1.4–2.5]; FHA: 2.2 [95% CI, 1.7–3.0], P < .001). Expected infant seropositivity rates reached 80% vs 55% following second- vs third-trimester immunization (adjusted odds ratio, 3.7 [95% CI, 2.1–6.5], P < .001). Conclusions. Early second-trimester maternal Tdap immunization significantly increased neonatal antibodies. Recommending immunization from the second trimester onward would widen the immunization opportunity window and could improve seroprotection. PMID:26797213

  8. Expectation maximization classification and Laplacian based thickness measurement for cerebral cortex thickness estimation

    NASA Astrophysics Data System (ADS)

    Holden, Mark; Moreno-Vallecillo, Rafael; Harris, Anthony; Gomes, Lavier J.; Diep, Than-Mei; Bourgeat, Pierrick T.; Ourselin, Sébastien

    2007-03-01

    We describe a new framework for measuring cortical thickness from MR human brain images. This involves the integration of a method of tissue classification with one to estimate thickness in 3D. We have introduced an additional boundary detection step to facilitate this. The classification stage utilizes the Expectation Maximisation (EM) algorithm to classify voxels associated with the tissue types that interface with cortical grey matter (GM, WM and CSF). This uses a Gaussian mixture and the EM algorithm to estimate the position and width of the Gaussians that model the intensity distributions of the GM, WM and CSF tissue classes. The boundary detection stage uses the GM, WM and CSF classifications and finds connected components, fills holes and then applies a geodesic distance transform to determine the GM/WM interface. Finally, the thickness of the cortical grey matter is estimated by solving Laplace's equation and determining the streamlines that connect the inner and outer boundaries. The contribution of this work is the adaptation of the classification and thickness measurement steps, neither requiring manual initialisation, and also the validation strategy. The resultant algorithm is fully automatic and avoids the computational expense associated with preserving the cortical surface topology. We have devised a validation strategy that indicates that the cortical segmentation of a gold standard brain atlas has a similarity index of 0.91, that thickness estimation has subvoxel accuracy (evaluated using a synthetic image), and that the precision of the combined segmentation and thickness measurement is 1.54 mm using three clinical images.
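
    The Laplace step can be sketched on a toy rectangular "cortex" strip: fix the potential to 0 on the inner boundary and 1 on the outer one, relax to the harmonic solution, and (in the full method) integrate streamlines of its gradient. This sketch stops at the potential field; the grid size and iteration count are arbitrary:

```python
import numpy as np

# Jacobi relaxation for Laplace's equation between two boundaries:
# phi = 0 on the inner surface (left edge), phi = 1 on the outer (right edge).
phi = np.zeros((20, 30))
phi[:, -1] = 1.0
for _ in range(5000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]   # reflective top/bottom
    phi[:, 0], phi[:, -1] = 0.0, 1.0                # re-impose Dirichlet edges

# For a flat strip the potential is linear between the two boundaries, so
# every gradient streamline has the same length (the strip width).
print(np.allclose(phi[10], np.linspace(0.0, 1.0, 30), atol=0.01))
```

    On a folded cortical ribbon the level sets of phi follow the folds, and streamline lengths give a thickness that is well defined even where opposing banks of a sulcus nearly touch.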

  9. Crustacean hemolymph microbiota: Endemic, tightly controlled, and utilization expectable.

    PubMed

    Wang, Xian-Wei; Wang, Jin-Xing

    2015-12-01

    An increasing body of evidence suggests that the hemolymph of numerous apparently healthy invertebrates is unsterile. Investigating the properties of the hemolymph microbiota and the homeostasis between host and bacteria helps reveal bacterial pathogenesis, host immunity, and possible applications in disease control. Crustaceans represent a large group of aquatic animals, and crustacean fisheries are therefore of substantial economic value worldwide. Research on the crustacean hemolymph microbiota has been performed over the years. In the present study, we consolidate the currently available information and present a comprehensive analysis of the homeostasis between host and bacteria. In general, the presence of microbiota in crustacean hemolymph is an endemic event and can be influenced by internal and external factors. Opportunistic bacteria may have undergone changes or mutations under hemolymph stress. Meanwhile, hosts suppress hemolymph microbiota proliferation with the help of critical antimicrobial peptides and lectins. The hemolymph microbiota may benefit hosts by conferring resistance against external damage. In addition, the hemolymph microbiota may be utilized in aquaculture.

  10. Expectation maximization SPECT reconstruction with a content-adaptive singularity-based mesh-domain image model

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Ye, Hongwei; Xu, Yuesheng; Hu, Xiaofei; Vogelsang, Levon; Shen, Lixin; Feiglin, David; Lipson, Edward; Krol, Andrzej

    2008-03-01

    To improve the speed and quality of ordered-subsets expectation-maximization (OSEM) SPECT reconstruction, we have implemented a content-adaptive, singularity-based, mesh-domain, image model (CASMIM) with an accurate algorithm for estimation of the mesh-domain system matrix. A preliminary image, used to initialize CASMIM reconstruction, was obtained using pixel-domain OSEM. The mesh-domain representation of the image was produced by a 2D wavelet transform followed by Delaunay triangulation to obtain joint estimation of nodal locations and their activity values. A system matrix with attenuation compensation was investigated. Digital chest phantom SPECT was simulated and reconstructed. The quality of images reconstructed with OSEM-CASMIM is comparable to that from pixel-domain OSEM, but images are obtained five times faster by the CASMIM method.

  11. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    ERIC Educational Resources Information Center

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…

  12. Iterative three-dimensional expectation maximization restoration of single photon emission computed tomography images: Application in striatal imaging

    SciTech Connect

    Gantet, Pierre; Payoux, Pierre; Celler, Anna; Majorel, Cynthia; Gourion, Daniel; Noll, Dominikus; Esquerre, Jean-Paul

    2006-01-15

    Single photon emission computed tomography imaging suffers from poor spatial resolution and high statistical noise. Consequently, the contrast of small structures is reduced, the visual detection of defects is limited, and precise quantification is difficult. To improve the contrast, it is possible to include the spatially variant point spread function of the detection system in the iterative reconstruction algorithm. This kind of method is well known to be effective, but time consuming. We have developed a faster method to account for the spatial resolution loss in three dimensions, based on a postreconstruction restoration method. The method uses two steps. First, a noncorrected iterative ordered subsets expectation maximization (OSEM) reconstruction is performed and, in the second step, a three-dimensional (3D) iterative maximum likelihood expectation maximization (ML-EM) a posteriori spatial restoration of the reconstructed volume is done. In this paper, we compare this restoration method (OSEM-R) to the standard OSEM-3D method in three studies (two in simulation and one from experimental data). In the first two studies, contrast, noise, and visual detection of defects are studied. In the third study, a quantitative analysis is performed from data obtained with an anthropomorphic striatal phantom filled with 123-I. From the simulations, we demonstrate that contrast as a function of noise and lesion detectability are very similar for both OSEM-3D and OSEM-R methods. In the experimental study, we obtained very similar values of activity-quantification ratios for different regions in the brain. The advantage of OSEM-R compared to OSEM-3D is a substantial gain in processing time. This gain depends on several factors. In a typical situation, for a 128x128 acquisition of 120 projections, OSEM-R is 13 or 25 times faster than OSEM-3D, depending on the calculation method used in the iterative restoration. In this paper, the OSEM-R method is tested with the approximation of depth independent

  13. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. The speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and offered more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.

  14. Development of a fully 3D system model in iterative expectation-maximization reconstruction for cone-beam SPECT

    NASA Astrophysics Data System (ADS)

    Ye, Hongwei; Vogelsang, Levon; Feiglin, David H.; Lipson, Edward D.; Krol, Andrzej

    2008-03-01

    In order to improve reconstructed image quality for cone-beam collimator (CBC) SPECT, we have developed and implemented a fully 3D reconstruction using an ordered subsets expectation maximization (OSEM) algorithm, along with a volumetric system model - the cone-volume system model (CVSM) - a modified attenuation compensation, and a 3D depth- and angle-dependent resolution and sensitivity correction. SPECT data were acquired in a 128×128 matrix, in 120 views with a single circular orbit. Two sets of numerical Defrise phantoms were used to simulate CBC SPECT scans, and low-noise, scatter-free projection datasets were obtained using the SimSET Monte Carlo package. The reconstructed images, obtained using OSEM with a line-length system model (LLSM) and a 3D Gaussian post-filter, and OSEM with CVSM and a 3D Gaussian post-filter, were quantitatively studied. Overall improvement in image quality was observed for OSEM-CVSM compared with OSEM-LLSM, including better transaxial resolution, higher contrast-to-noise ratio between hot and cold disks, and better accuracy and lower bias.

  15. Comparison of ordered subsets expectation maximization and Chang's attenuation correction method in quantitative cardiac SPET: a phantom study.

    PubMed

    Dey, D; Slomka, P J; Hahn, L J; Kloiber, R

    1998-12-01

    Photon attenuation is one of the primary causes of artifacts in cardiac single photon emission tomography (SPET). Several attenuation correction algorithms have been proposed. The aim of this study was to compare the effect of using the ordered subsets expectation maximization (OSEM) reconstruction algorithm and Chang's non-uniform attenuation correction method on quantitative cardiac SPET. We performed SPET scans of an anthropomorphic phantom simulating normal and abnormal myocardial studies. Attenuation maps of the phantom were obtained from computed tomographic images. The SPET projection data were corrected for attenuation using OSEM reconstruction, as well as Chang's method. For each defect scan and attenuation correction method, we calculated three quantitative parameters: average radial maximum (ARM) ratio of the defect-to-normal area, maximum defect contrast (MDC) and defect volume, using automated three-dimensional quantitation. The differences between the two methods were less than 4% for defect-to-normal ARM ratio, 19% for MDC and 13% for defect volume. These differences are within the range of estimated statistical variation of SPET. The calculation times of the two methods were comparable. For all SPET studies, OSEM attenuation correction gave a more correct activity distribution, with respect to both the homogeneity of the radiotracer and the shape of the cardiac insert. The difference in uniformity between OSEM and Chang's method was quantified by segmental analysis and found to be less than 8% for the normal study. In conclusion, OSEM and Chang's attenuation correction are quantitatively equivalent, with comparable calculation times. OSEM reconstruction gives a more correct activity distribution and is therefore preferred.
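
    Chang's method referenced above applies a multiplicative post-reconstruction correction. A minimal sketch of the first-order correction factor for a uniform circular attenuator follows; the attenuation coefficient and geometry are assumed values for illustration, not those of the phantom study.

```python
import numpy as np

def chang_factor(x, y, mu, radius, n_angles=64):
    """First-order Chang correction factor at point (x, y) inside a
    uniform circular attenuator: the reciprocal of the attenuation
    averaged over all projection angles."""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    dx, dy = np.cos(angles), np.sin(angles)
    # Distance from (x, y) to the circle boundary along each angle.
    b = x * dx + y * dy
    d = -b + np.sqrt(b * b - (x * x + y * y - radius * radius))
    mean_attenuation = np.mean(np.exp(-mu * d))
    return 1.0 / mean_attenuation

# Correction is largest at the center, where rays traverse the most tissue.
center = chang_factor(0.0, 0.0, mu=0.15, radius=10.0)  # mu in cm^-1, r in cm
edge = chang_factor(9.0, 0.0, mu=0.15, radius=10.0)
```

    At the center every ray traverses exactly one radius, so the factor reduces to exp(mu * radius), which makes the sketch easy to verify by hand.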

  16. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent the survival time distribution of a heterogeneous patient group by the proportion of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is the barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantage of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm for each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
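
    The soft decomposition performed by EM can be sketched for the simplest case: an unlabeled two-component exponential mixture. The class rates and mixing proportions below are simulated assumptions; the EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants proposed in the paper extend this scheme to exploit partially specified class values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heterogeneous survival times: two latent classes with
# exponential rates 1.0 and 0.2, mixed 60/40 (assumed values).
t = np.concatenate([rng.exponential(1.0, 600), rng.exponential(5.0, 400)])

# EM for a two-component exponential mixture, with no labels at all.
pi, lam = np.array([0.5, 0.5]), np.array([2.0, 0.1])
for _ in range(200):
    # E-step: posterior class probabilities (soft decomposition).
    dens = pi * lam * np.exp(-np.outer(t, lam))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing proportions and rates.
    nk = resp.sum(axis=0)
    pi = nk / len(t)
    lam = nk / (resp * t[:, None]).sum(axis=0)
```

    A partial-label variant would simply clamp (or constrain) the responsibilities `resp` for samples whose class information is available.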

  17. The role of data assimilation in maximizing the utility of geospace observations (Invited)

    NASA Astrophysics Data System (ADS)

    Matsuo, T.

    2013-12-01

    Data assimilation can facilitate maximizing the utility of existing geospace observations by offering an ultimate marriage of inductive (data-driven) and deductive (first-principles based) approaches to addressing critical questions in space weather. Assimilative approaches that incorporate dynamical models are, in particular, capable of making a diverse set of observations consistent with the physical processes included in a first-principles model, and of allowing unobserved physical states to be inferred from observations. These points will be demonstrated in the context of the application of an ensemble Kalman filter (EnKF) to a thermosphere and ionosphere general circulation model. An important attribute of this approach is that the feedback between plasma and neutral variables is treated self-consistently both in the forecast model and in the assimilation scheme. This takes advantage of the intimate coupling between the thermosphere and ionosphere described in general circulation models to enable the inference of unobserved thermospheric states from the relatively plentiful observations of the ionosphere. Given the ever-growing infrastructure for the global navigation satellite system, this is indeed a promising prospect for geospace data assimilation. In principle, similar approaches can be applied to any geospace observing system to extract more geophysical information from a given set of observations than would otherwise be possible.
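
    The core mechanism described here, correcting unobserved states through their covariance with observed ones, can be sketched with a stochastic EnKF analysis step on a two-variable toy state. The correlation structure and noise levels are assumptions for illustration, not values from a thermosphere-ionosphere model.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, H, y, obs_var):
    """Stochastic EnKF analysis step: each ensemble member is nudged
    toward a perturbed observation using the sample covariance, so
    unobserved state components are corrected through their sampled
    correlation with the observed ones."""
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    P_xy = X @ HXp.T / (n_ens - 1)
    P_yy = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(H.shape[0])
    K = P_xy @ np.linalg.inv(P_yy)  # Kalman gain
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (H.shape[0], n_ens))
    return ensemble + K @ (y_pert - HX)

# Two correlated state variables (think: observed plasma density and
# unobserved neutral state); only the first is observed, yet both update.
truth = np.array([2.0, 1.0])
ens = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], 200).T
H = np.array([[1.0, 0.0]])
ens_a = enkf_update(ens, H, truth[:1], obs_var=0.01)
```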

  18. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
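
    The two valuation rules contrasted in this abstract can be made concrete with a toy gamble; the concave power utility and the risk-aversion coefficient below are assumed illustrative choices, not quantities from the article.

```python
import numpy as np

# A risky prospect: payoffs in each state of nature, with probabilities.
payoffs = np.array([0.0, 50.0, 100.0])
probs = np.array([0.2, 0.5, 0.3])

# Expected utility: weight the utility of each state's payoff by its
# probability (here a concave power utility, an assumed choice).
def u(x, rho=0.5):
    return x ** rho

eu = np.sum(probs * u(payoffs))

# Mean-variance: trade expected reward against reward variance
# (risk-aversion coefficient b is an assumed parameter).
mean = np.sum(probs * payoffs)
var = np.sum(probs * (payoffs - mean) ** 2)
b = 0.01
mv = mean - b * var
```

    Note how the mean-variance value needs only two summary statistics, whereas expected utility touches every state probability separately, which is the learning burden the abstract points to.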

  19. 76 FR 37376 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-27

    ... Management and Budget (67 FR 8452-8460), pursuant to section 515 of the Treasury and General Government... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... corruption or falsification. 6. ``Objectivity'' is a measure of whether disseminated information is...

  20. Social and Professional Participation of Individuals Who Are Deaf: Utilizing the Psychosocial Potential Maximization Framework

    ERIC Educational Resources Information Center

    Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise

    2012-01-01

    This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…

  1. The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.

    PubMed

    Zhang, Xian; Hirsch, Joy

    2013-01-15

    Real-world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in the fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials providing rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with the anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with the supplementary motor area, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with a primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852
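
    The regressors V and ∂V/∂t can be illustrated with the exponential discounting form the behavioral data supported; the reward magnitude, discount rate, and horizon below are arbitrary illustrative values, not the study's estimates.

```python
import numpy as np

# Exponential temporal discounting of a reward R expected at time T:
# V(t) = R * exp(-k * (T - t)), so expected utility rises as the
# reward draws near.  R, k, T are illustrative values.
R, k, T = 10.0, 0.5, 4.0
t = np.linspace(0.0, 4.0, 401)
V = R * np.exp(-k * (T - t))

# The temporal derivative satisfies dV/dt = k * V for an approaching
# reward; here we recover it numerically.
dVdt = np.gradient(V, t)
```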

  2. Robust optimal sensor placement for operational modal analysis based on maximum expected utility

    NASA Astrophysics Data System (ADS)

    Li, Binbin; Der Kiureghian, Armen

    2016-06-01

    Optimal sensor placement is essentially a decision problem under uncertainty. Maximum expected utility theory and a Bayesian linear model are used in this paper for robust sensor placement aimed at operational modal identification. To avoid nonlinear relations between modal parameters and measured responses, we choose to optimize the sensor locations with respect to identifying modal responses. Since the modal responses contain all the information necessary to identify the modal parameters, the optimal sensor locations for modal response estimation provide at least a suboptimal solution for identification of modal parameters. First, a probabilistic model for sensor placement considering model uncertainty, load uncertainty and measurement error is proposed. Maximum expected utility theory is then applied with this model, considering utility functions based on three principles: quadratic loss, Shannon information, and K-L divergence. In addition, the prior covariance of modal responses under band-limited white-noise excitation is derived, and the nearest Kronecker product approximation is employed to accelerate evaluation of the utility function. As demonstration and validation examples, sensor placements in a 16-degree-of-freedom shear-type building and in the Guangzhou TV Tower under ground motion and wind load are considered. Placements of individual displacement meters, velocimeters and accelerometers, as well as placements of mixed sensor types, are illustrated.
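
    A toy version of expected-utility sensor selection under the Shannon-information principle: score each candidate sensor set by the log-determinant of the posterior precision of the modal responses in a Bayesian linear model. The mode-shape matrix, noise variance, and unit prior are assumptions, and brute-force search stands in for the Kronecker-product acceleration used in the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Mode-shape matrix for a hypothetical 6-DOF structure with 3 modes:
# rows = candidate sensor locations, columns = modes.
Phi = rng.normal(size=(6, 3))
sigma2 = 0.1  # measurement-noise variance (assumed)

def expected_info(rows):
    """Shannon-information utility of a sensor set: log-determinant of
    the posterior precision of the modal responses under a Bayesian
    linear model with unit prior precision."""
    A = Phi[list(rows)]
    post_prec = np.eye(3) + A.T @ A / sigma2
    return np.linalg.slogdet(post_prec)[1]

# Exhaustive search over all 3-sensor placements (feasible at this size).
best = max(combinations(range(6), 3), key=expected_info)
```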

  3. Expected Utility Based Decision Making under Z-Information and Its Application.

    PubMed

    Aliev, Rashad R; Mraiziq, Derar Atallah Talal; Huseynov, Oleg H

    2015-01-01

    Real-world decision relevant information is often partially reliable. The reasons are partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-numbers based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest in line with the related NL based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from the human perception point of view as compared to fuzzy numbers. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes the expected utility paradigm and is applied to a benchmark decision problem in the field of economics. PMID:26366163

  6. Maximizing the diagnostic utility of endoscopic biopsy in dogs and cats with gastrointestinal disease.

    PubMed

    Jergens, Albert E; Willard, Michael D; Allenspach, Karin

    2016-08-01

    Flexible endoscopy has become a valuable tool for the diagnosis of many small animal gastrointestinal (GI) diseases, but the techniques must be performed carefully so that the results are meaningful. This article reviews the current diagnostic utility of flexible endoscopy, including practical/technical considerations for endoscopic biopsy, optimal instrumentation for mucosal specimen collection, the correlation of endoscopic indices to clinical activity and to histopathologic findings, and new developments in the endoscopic diagnosis of GI disease. Recent studies have defined endoscopic biopsy guidelines for the optimal number and quality of diagnostic specimens from different regions of the gut. They also have shown the value of ileal biopsy in the diagnosis of canine and feline chronic enteropathies, and have demonstrated the utility of endoscopic biopsy specimens beyond routine hematoxylin and eosin histopathological analysis, including their use in immunohistochemical, microbiological, and molecular studies. PMID:27387727

  7. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide a key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, an MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
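
    The property that makes every MIS a dominating set (any node left outside must have a neighbor inside, or it could still be added) can be seen in a greedy sketch. The example graph and the high-degree-first ordering are illustrative assumptions, not the construction analyzed in the study.

```python
def maximal_independent_set(adj):
    """Greedy maximal independent set: repeatedly pick a free node and
    block its neighbors.  The result is independent by construction and
    dominating by maximality."""
    # Visiting high-degree nodes first tends to keep the set small.
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    mis, blocked = set(), set()
    for v in nodes:
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked |= adj[v]
    return mis

# Small example graph (a star attached to a path), as adjacency sets.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3, 5}, 5: {4}}
mis = maximal_independent_set(adj)
```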

  8. RETINOPATHY OF PREMATURITY: INVOLUTION, FACTORS PREDISPOSING TO RETINAL DETACHMENT, AND EXPECTED UTILITY OF PREEMPTIVE SURGICAL REINTERVENTION

    PubMed Central

    Coats, David K

    2005-01-01

    Purpose To characterize involution of retinopathy of prematurity (ROP) following treatment at threshold, to identify findings during involution that portend development of retinal detachment, and to assess the potential utility of preemptive vitrectomy for eyes with high-risk features. Methods The probability of ROP involution and of retinal detachment evolution over time was analyzed in 262 treated eyes of 138 infants in a retrospective observational non–case controlled series. Expected utility of preemptive reintervention in eyes with high-risk features was evaluated using decision analysis. Modifications were devised to enhance classification of advanced ROP. Results ROP fully involuted in approximately 80% of eyes within 28 days of treatment. Vitreous organization meeting the study’s clinically important definition was associated with a 31-fold (5.37 to 183.63; P < .0001) and a 13-fold (2.97 to 58.59; P < .0001) increase in the odds for retinal detachment for right and left eyes, respectively. Vitreous hemorrhage defined as clinically important was associated with a 38-fold (2.69 to 551.19; P = .007) and a 15-fold (1.65 to 144.12; P = .02) increase in the odds for retinal detachment for right and left eyes, respectively. As modeled, an expected utility of 0.85 was calculated for preemptive vitrectomy compared with 0.79 for deferred vitrectomy for eyes with clinically important vitreous organization. Conclusions Acute-phase ROP involuted quickly in most eyes. Vitreous organization and vitreous hemorrhage were predictive of eyes that developed a retinal detachment. Decision analysis suggests that preemptive vitrectomy for eyes with vitreous organization meeting specific criteria is not likely to be worse than deferred vitrectomy, and it could be advantageous in some scenarios. PMID:17057808

  9. Maximizing the utility of monitoring to the adaptive management of natural resources

    USGS Publications Warehouse

    Kendall, William L.; Moore, Clinton T.; Gitzen, Robert A.; Cooper, Andrew B.; Millspaugh, Joshua J.; Licht, Daniel S.

    2012-01-01

    Data collection is an important step in any investigation about the structure or processes related to a natural system. In a purely scientific investigation (experiments, quasi-experiments, observational studies), data collection is part of the scientific method, preceded by the identification of hypotheses and the design of any manipulations of the system to test those hypotheses. Data collection and the manipulations that precede it are ideally designed to maximize the information that is derived from the study. That is, such investigations should be designed for maximum power to evaluate the relative validity of the hypotheses posed. When data collection is intended to inform the management of ecological systems, we call it monitoring. Note that our definition of monitoring encompasses a broader range of data-collection efforts than some alternative definitions – e.g. Chapter 3. The purpose of monitoring as we use the term can vary, from surveillance or “thumb on the pulse” monitoring (see Nichols and Williams 2006), intended to detect changes in a system due to any non-specified source (e.g. the North American Breeding Bird Survey), to very specific and targeted monitoring of the results of specific management actions (e.g. banding and aerial survey efforts related to North American waterfowl harvest management). Although a role of surveillance monitoring is to detect unanticipated changes in a system, the same result is possible from a collection of targeted monitoring programs distributed across the same spatial range (Box 4.1). In the face of limited budgets and many specific management questions, tying monitoring as closely as possible to management needs is warranted (Nichols and Williams 2006). Adaptive resource management (ARM; Walters 1986, Williams 1997, Kendall 2001, Moore and Conroy 2006, McCarthy and Possingham 2007, Conroy et al. 2008a) provides a context and specific purpose for monitoring: to evaluate decisions with respect to achievement

  10. A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility.

    PubMed

    Guo, Zhishan; Baruah, Sanjoy K

    2016-02-01

    In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure, and can also be implemented on circuits, on which the converging time can be significantly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases. PMID:26336153
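
    For intuition, the special case of a single shared resource and separable concave piecewise linear utilities admits a simple greedy solution (fund the steepest remaining segments first); the paper's neurodynamic approach targets the general linearly constrained case, which this sketch does not cover. The job segments below are hypothetical.

```python
# Allocating processor time to jobs with concave piecewise linear
# utilities: because marginal utility is non-increasing within each
# job, greedily funding the steepest remaining segment is optimal for
# a single capacity constraint.
def allocate(segments, capacity):
    """segments: flattened list of (slope, length) pairs across jobs.
    Returns the total utility of the optimal allocation."""
    total = 0.0
    for slope, length in sorted(segments, key=lambda s: -s[0]):
        take = min(length, capacity)
        total += slope * take
        capacity -= take
        if capacity <= 0:
            break
    return total

# Two jobs, each with a steep first segment and a flatter second one.
segs = [(3.0, 1.0), (1.0, 2.0),   # job A
        (2.0, 1.5), (0.5, 2.0)]   # job B
best = allocate(segs, capacity=3.0)
```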

  11. New method for tuning hyperparameter for the total variation norm in the maximum a posteriori ordered subsets expectation maximization reconstruction in SPECT myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoxia; Krol, Andrzej; Xu, Yuesheng; Feiglin, David H.

    2011-03-01

    In order to improve the tradeoff between noise and bias, and to improve the uniformity of the reconstructed myocardium while preserving spatial resolution in parallel-beam collimator SPECT myocardial perfusion imaging (MPI), we investigated the most advantageous approach to providing a reliable estimate of the optimal value of the hyperparameter for the Total Variation (TV) norm in iterative Bayesian Maximum A Posteriori Ordered Subsets Expectation Maximization (MAP-OSEM) one-step-late tomographic reconstruction with a Gibbs prior. Our aim was to find the value of the hyperparameter corresponding to the lowest bias at the lowest noise while maximizing uniformity and spatial resolution for the reconstructed myocardium in SPECT MPI. We found that the L-curve method, which is by definition a global technique, provides good guidance for selection of the optimal value of the hyperparameter. However, for a heterogeneous object such as the human thorax, fine-tuning of the hyperparameter value can only be accomplished by means of a local method such as the proposed bias-noise distance (BND) curve. We established that our BND-curve method provides an accurate estimate of the optimized hyperparameter value as long as the region-of-interest volume for which it is defined is sufficiently large and is located sufficiently close to the myocardium.
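
    The global L-curve idea mentioned above can be sketched on a toy Tikhonov problem: trace residual norm against solution norm over a hyperparameter grid and pick the corner. The blur model, noise level, and normalized-distance corner heuristic are assumptions for illustration; the BND-curve method proposed in the paper is specific to the SPECT reconstruction setting and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small ill-posed problem: Gaussian blur matrix A, noisy data y.
n = 40
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * ((i - j) / 2.0) ** 2)
x_true = np.zeros(n)
x_true[15:25] = 1.0
y = A @ x_true + rng.normal(0.0, 0.05, n)

# Sweep the hyperparameter, recording residual and solution norms.
lambdas = np.logspace(-4, 2, 60)
rho, eta = [], []
for lam in lambdas:
    x_l = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    rho.append(np.linalg.norm(A @ x_l - y))
    eta.append(np.linalg.norm(x_l))
rho, eta = np.log(rho), np.log(eta)

# One common corner heuristic: normalize the log-log curve and take
# the point closest to the lower-left origin.
r = (rho - rho.min()) / (rho.max() - rho.min())
e = (eta - eta.min()) / (eta.max() - eta.min())
corner = int(np.argmin(r ** 2 + e ** 2))
lam_opt = lambdas[corner]
```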

  12. Expectation-Maximization Algorithm Based System Identification of Multiscale Stochastic Models for Scale Recursive Estimation of Precipitation: Application to Model Validation and Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Gupta, R.; Venugopal, V.; Foufoula-Georgiou, E.

    2003-12-01

    Owing to the tremendous scale-dependent variability of precipitation and discrepancies in scale or resolution among different types/sources of observations, comparing or merging observations at different scales, or validating Quantitative Precipitation Forecasts (QPF) with observations, is not trivial. Traditional methods of QPF validation (e.g., point to area) have been found deficient, and to alleviate some of the concerns, a new methodology called scale-recursive estimation (SRE) was introduced recently. This method, which has its roots in Kalman filtering, can (i) handle disparate (in scale) measurement sources; (ii) account for the observational uncertainty associated with each sensor; and (iii) incorporate a multiscale model (theoretical or empirical) which captures the observed scale-to-scale variability of precipitation. The result is an optimal (unbiased and minimum error variance) estimate at any desired scale along with its error statistics. Our preliminary studies have indicated that lognormal and bounded lognormal multiplicative cascades are the most successful candidates as state-propagation models for precipitation across a range of scales. However, the parameters of these models were found to be highly sensitive to the observed intermittency of precipitation fields. To address this problem, we have chosen to take a "system identification" approach instead of prescribing the type of multiscale model a priori. The first part of this work focuses on the use of Maximum Likelihood (ML) identification for estimating the parameters of a multiscale stochastic state-space model directly from the given data. The Expectation-Maximization (EM) algorithm is used to iteratively solve for ML estimates. The "expectation" step makes use of a Kalman smoother to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. Using high resolution forecast precipitation fields from ARPS (Advanced Regional Prediction System), concurrent

  13. Effects of lung ventilation–perfusion and muscle metabolism–perfusion heterogeneities on maximal O2 transport and utilization

    PubMed Central

    Cano, I; Roca, J; Wagner, P D

    2015-01-01

    Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle, but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation–perfusion and skeletal muscle metabolism–perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude, and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (V̇O2) and on mitochondrial PO2. In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 at sea level by 32 mmHg, but muscle V̇O2 by only 122 ml min−1 (−3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced arterial PO2 by less than 1 mmHg and V̇O2 by 32 ml min−1 (−2.4%). Skeletal muscle heterogeneity led to a wide range of potential mitochondrial PO2 among muscle regions, a range that becomes narrower as V̇O2 increases; in regions with a low ratio of metabolic capacity to blood flow, mitochondrial PO2 can exceed that of mixed muscle venous blood. For patients with severe COPD, peak V̇O2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of both V̇O2 and mitochondrial PO2, in health and in diseases such as COPD, if the extent of both lung ventilation–perfusion and tissue metabolism–perfusion heterogeneity is known. PMID:25640017

  14. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a significant challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
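
As a rough illustration of the LM-OSEM update the record refers to (not the VIP implementation, which models the full Compton-camera geometry and Doppler broadening), here is a minimal list-mode OSEM sketch in which each detected event carries its own system-matrix row; the function name and the two-voxel toy setup are hypothetical.

```python
import numpy as np

def lm_osem(event_rows, sens, n_vox, n_subsets=4, n_iter=5):
    """List-mode OSEM sketch. event_rows[i] is the system-matrix row of
    detected event i: p(i, v) = prob. that an emission in voxel v produces
    event i. sens[v] is the voxel sensitivity (sum of p over all detectable
    events). Events are split into ordered subsets; each subset drives one
    multiplicative EM update of the image lam."""
    lam = np.ones(n_vox)                              # uniform initial image
    subsets = np.array_split(np.arange(len(event_rows)), n_subsets)
    for _ in range(n_iter):
        for sub in subsets:
            back = np.zeros(n_vox)
            for i in sub:
                p = event_rows[i]
                back += p / (p @ lam)                 # backproject event ratio
            lam = lam * back / (sens / n_subsets)     # multiplicative update
    return lam
```

On a toy two-voxel phantom with a 3:1 activity ratio, the reconstruction recovers the contrast after a handful of passes, which is the behaviour the abstract reports at full scale.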

  15. Illustrating Caffeine's Pharmacological and Expectancy Effects Utilizing a Balanced Placebo Design.

    ERIC Educational Resources Information Center

    Lotshaw, Sandra C.; And Others

    1996-01-01

    Hypothesizes that pharmacological and expectancy effects may be two principles that govern caffeine consumption in the same way they affect other drug use. Tests this theory through a balanced placebo design on 100 male undergraduate students. Expectancy set and caffeine content appeared equally powerful, and worked additively, to affect…

  16. The utility of novelty seeking, harm avoidance, and expectancy in the prediction of drinking.

    PubMed

    Galen, L W; Henderson, M J; Whitman, R D

    1997-01-01

    To test the hypothesis that two temperament scales (Novelty Seeking and Harm Avoidance) are differentially related to alcohol expectancies and drinking patterns, 140 adolescents from an inpatient psychiatric facility completed several self-report questionnaires measuring temperament, alcohol expectancies, and alcohol consumption. Moderated multiple regression analyses indicated that Novelty Seeking was significantly related to frequency of drinking and problem drinking, but that Harm Avoidance was not related to these variables. Results of the MANOVA indicated that high novelty seeking and low harm avoidant (Type 2) individuals had a significantly higher frequency of drinking than did individuals who were high on Harm Avoidance and low on Novelty Seeking (Type 1). Results also showed that expectancy and Novelty Seeking contributed significant independent and overlapping variance in the prediction of amount of drinking. Although Novelty Seeking was related to expectations of social functioning, other hypothesized relationships between temperament and expectancy were not supported.

  17. Comparison between the Health Belief Model and Subjective Expected Utility Theory: predicting incontinence prevention behaviour in post-partum women.

    PubMed

    Dolman, M; Chase, J

    1996-08-01

    A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravida females. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises, and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly. PMID:9238593

  18. Expect variation in utilization, revenue based on plan affiliation, warns Medicare risk provider.

    PubMed

    1998-06-01

    Anatomy of a Medicare risk contract: A seasoned California Medicare provider shares the details on its risk contracts and warns other providers to expect wide variation in performance based on plan affiliation. Here's the "inside story" on how this provider has garnered 10% of the local market share, but it's sometimes been an uphill struggle.

  19. Interest of the ordered subsets expectation maximization (OS-EM) algorithm in pinhole single-photon emission tomography reconstruction: a phantom study.

    PubMed

    Vanhove, C; Defrise, M; Franken, P R; Everaert, H; Deconinck, F; Bossuyt, A

    2000-02-01

    Pinhole single-photon emission tomography (SPET) has been proposed to improve the trade-off between sensitivity and resolution for small organs located in close proximity to the pinhole aperture. This technique is hampered by artefacts in the non-central slices. These artefacts are caused by truncation and by the fact that the pinhole SPET data collected in a circular orbit do not contain sufficient information for exact reconstruction. The ordered subsets expectation maximization (OS-EM) algorithm is a potential solution to these problems. In this study a three-dimensional OS-EM algorithm was implemented for data acquired on a single-head gamma camera equipped with a pinhole collimator (PH OS-EM). The aim of this study was to compare the PH OS-EM algorithm with the filtered back-projection algorithm of Feldkamp, Davis and Kress (FDK) and with the conventional parallel-hole geometry as a whole, using a line source phantom, Picker's thyroid phantom and a phantom mimicking the human cervical column. Correction for the angular dependency of the sensitivity in the pinhole geometry was based on a uniform flood acquisition. The projection data were shifted according to the measured centre of rotation. No correction was made for attenuation, scatter or distance-dependent camera resolution. The resolution measured with the line source phantom showed a significant improvement with PH OS-EM as compared with FDK, especially in the axial direction. Using Picker's thyroid phantom, one iteration with eight subsets was sufficient to obtain images with similar noise levels in uniform regions of interest to those obtained with the FDK algorithm. With these parameters the reconstruction time was 2.5 times longer than for the FDK method. Furthermore, there was a reduction in the artefacts caused by the circular orbit SPET acquisition. 
The images obtained from the phantom mimicking the human cervical column indicated that the improvement in image quality with PH OS-EM is relevant for
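
The OS-EM update for binned projection data can be sketched generically as follows; this hedged toy implementation works on a tiny system matrix and omits the pinhole sensitivity and centre-of-rotation corrections described in the abstract.

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=10):
    """Ordered-subsets EM for nonnegative projection data y ≈ A @ lam.
    Rows of A (projection bins) are split into ordered subsets; each subset
    drives one multiplicative EM update, so a single pass over all subsets
    does roughly the work of n_subsets plain ML-EM iterations."""
    n_bins, n_vox = A.shape
    lam = np.ones(n_vox)                              # uniform start image
    for _ in range(n_iter):
        for sub in np.array_split(np.arange(n_bins), n_subsets):
            As, ys = A[sub], y[sub]
            ratio = ys / np.maximum(As @ lam, 1e-12)  # measured / predicted
            sens = np.maximum(As.sum(axis=0), 1e-12)  # subset sensitivity
            lam = lam * (As.T @ ratio) / sens         # multiplicative update
    return lam
```

For consistent noise-free data the iteration converges to the true activity, and the subset structure is what gives OS-EM its speed advantage over plain EM, the trade-off evaluated in the record above.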

  20. Pt skin on AuCu intermetallic substrate: a strategy to maximize Pt utilization for fuel cells.

    PubMed

    Wang, Gongwei; Huang, Bing; Xiao, Li; Ren, Zhandong; Chen, Hao; Wang, Deli; Abruña, Héctor D; Lu, Juntao; Zhuang, Lin

    2014-07-01

    The dependence on Pt catalysts has been a major issue of proton-exchange membrane (PEM) fuel cells. Strategies to maximize the Pt utilization in catalysts include two main approaches: to put Pt atoms only at the catalyst surface and to further enhance the surface-specific catalytic activity (SA) of Pt. Thus far there has been no practical design that combines these two features into one single catalyst. Here we report a combined computational and experimental study on the design and implementation of Pt-skin catalysts with significantly improved SA toward the oxygen reduction reaction (ORR). Through screening, using density functional theory (DFT) calculations, a Pt-skin structure on AuCu(111) substrate, consisting of 1.5 monolayers of Pt, is found to have an appropriately weakened oxygen affinity, in comparison to that on Pt(111), which would be ideal for ORR catalysis. Such a structure is then realized by substituting the Cu atoms in three surface layers of AuCu intermetallic nanoparticles (AuCu iNPs) with Pt. The resulting Pt-skinned catalyst (denoted as Pt(S)AuCu iNPs) has been characterized in depth using synchrotron XRD, XPS, HRTEM, and HAADF-STEM/EDX, such that the Pt-skin structure is unambiguously identified. The thickness of the Pt skin was determined to be less than two atomic layers. Finally the catalytic activity of Pt(S)AuCu iNPs toward the ORR was measured via rotating disk electrode (RDE) voltammetry through which it was established that the SA was more than 2 times that of a commercial Pt/C catalyst. Taking into account the ultralow Pt loading in Pt(S)AuCu iNPs, the mass-specific catalytic activity (MA) was determined to be 0.56 A/mg(Pt)@0.9 V, a value that is well beyond the DOE 2017 target for ORR catalysts (0.44 A/mg(Pt)@0.9 V). These findings provide a strategic design and a realizable approach to high-performance and Pt-efficient catalysts for fuel cells.

  1. An eye-tracking investigation into readers' sensitivity to actual versus expected utility in the comprehension of conditionals.

    PubMed

    Haigh, Matthew; Ferguson, Heather J; Stewart, Andrew J

    2014-01-01

    The successful comprehension of a utility conditional (i.e., an "if p, then q" statement where p and/or q is valued by one or more agents) requires the construction of a mental representation of the situation described by that conditional and integration of this representation with prior context. In an eye-tracking experiment, we examined the time course of integrating conditional utility information into the broader discourse model. Specifically, the experiment determined whether readers were sensitive, during rapid heuristic processing, to the congruency between the utility of the consequent clause of a conditional (positive or negative) and a reader's subjective expectations based on prior context. On a number of eye-tracking measures we found that readers were sensitive to conditional utility: conditionals for which the consequent utility mismatched the utility that would be anticipated on the basis of prior context resulted in processing disruption. Crucially, this sensitivity emerged on measures that are accepted to indicate early processing within the language comprehension system, suggesting that the evaluation of a conditional's utility informs the early stages of conditional processing.

  2. Expected Utility Theory as a Guide to Contingency (Allowance or Management Reserve) Allocation

    SciTech Connect

    Thibadeau, Barbara M

    2006-01-01

    In this paper, I view a project from the perspective of utility theory. I suggest that, by determining an optimal percent contingency (relative to remaining work) and identifying and enforcing a required change in behavior, from one that is risk-seeking to one that is risk-averse, a project's contingency can be managed more effectively. I argue that early on in a project, risk-seeking behavior dominates. During this period, requests for contingency are less rigorously scrutinized. As the design evolves, more accurate information becomes available. Once the designs have been finalized, the project team must transition from a free-thinking, exploratory mode to an execution mode. If projects do not transition fast enough from a risk-seeking to a risk-averse organization, an inappropriate allocation of project contingency could occur (too much too early in the project). I show that the behavioral patterns used to characterize utility theory are those that exist in the project environment. I define a project's utility and thus, provide project managers with a metric against which all gambles (requests for contingency) can be evaluated. I discuss other research as it relates to utility and project management. From empirical data analysis, I demonstrate that there is a direct correlation between progress on a project's design activities and the rate at which project contingency is allocated and recommend a transition time frame during which the rate of allocation should decrease and the project should transition from risk-seeking to risk-averse. I show that these data are already available from a project's earned value management system and thus, inclusion of this information in the standard monthly reporting suite can enhance a project manager's decision making capability.
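
The gamble-evaluation metric described above can be illustrated with a CARA (constant absolute risk aversion) utility, where the sign of a single parameter switches between the risk-seeking and risk-averse behaviour the author discusses. The utility form, function names, and numbers are our own assumptions, not the paper's project-specific metric.

```python
import math

def cara_utility(x, r):
    """CARA utility u(x) = (1 - exp(-r*x)) / r: concave (risk-averse) for
    r > 0, convex (risk-seeking) for r < 0, linear in the r -> 0 limit."""
    return x if abs(r) < 1e-12 else (1.0 - math.exp(-r * x)) / r

def certainty_equivalent(outcomes, probs, r):
    """Sure amount judged exactly as good as the gamble under CARA utility:
    solve u(CE) = E[u(X)] for CE."""
    eu = sum(p * cara_utility(x, r) for p, x in zip(probs, outcomes))
    return eu if abs(r) < 1e-12 else -math.log(1.0 - r * eu) / r

# A contingency request as a gamble: 50% chance a design issue materializes
# and costs 100 (outcome -100), 50% chance it does not (outcome 0).
gamble = ([-100.0, 0.0], [0.5, 0.5])
```

A risk-averse manager (r > 0) values this gamble below its −50 mean, while a risk-seeking one (r < 0) values it above the mean, which is the behavioral transition the paper recommends enforcing as the design matures.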

  3. Maximizing the utilization and impact of medical educational software by designing for local area network (LAN) implementation.

    PubMed

    Stevens, R; Reber, E

    1993-01-01

    The design, development and implementation of medical education software often occurs without sufficient consideration of the potential benefits that can be realized by making the software network aware. These benefits can be considerable and can greatly enhance the utilization and potential impact of the software. This article details how multiple aspects of the IMMEX problem solving project have benefited from taking maximum advantage of LAN resources.

  4. Implementation of health information technology to maximize efficiency of resource utilization in a geographically dispersed prenatal care delivery system.

    PubMed

    Cochran, Marlo Baker; Snyder, Russell R; Thomas, Elizabeth; Freeman, Daniel H; Hankins, Gary D V

    2012-04-01

    This study investigated the utilization of health information technology (HIT) to enhance resource utilization in a geographically dispersed tertiary care system with extensive outpatient and delivery services. It was initiated as a result of a systems change implemented after Hurricane Ike devastated southeast Texas. A retrospective database and electronic medical record review was performed, which included data collection from all patients evaluated 18 months prior (epoch I) and 18 months following (epoch II) the landfall of Hurricane Ike. The months immediately following the storm were omitted from the analysis, allowing time to establish a new baseline. We analyzed a total of 21,201 patients evaluated in triage at the University of Texas Medical Branch. Epoch I consisted of 11,280 patients and epoch II consisted of 9922 patients. Using HIT, we were able to decrease the number of visits to triage while simultaneously managing more complex patients in the outpatient setting with no clinically significant change in maternal or fetal outcome. This study developed an innovative model of care using constrained resources while providing quality and safety to our patients without additional cost to the health care delivery system.

  5. Great Expectations.

    ERIC Educational Resources Information Center

    Smith, Jana J.

    2000-01-01

    Discusses how some universities are proactively looking to improve, enhance, and increase student housing on campus through new and renovated residence halls that meet and exceed the expectations of today's students. Renovation improvements related to maximizing security, enhancing a homelike environment, developing a sense of community, and…

  6. Expectant Mothers Maximizing Opportunities: Maternal Characteristics Moderate Multifactorial Prenatal Stress in the Prediction of Birth Weight in a Sample of Children Adopted at Birth

    PubMed Central

    Brotnow, Line; Reiss, David; Stover, Carla S.; Ganiban, Jody; Leve, Leslie D.; Neiderhiser, Jenae M.; Shaw, Daniel S.; Stevens, Hanna E.

    2015-01-01

    Background Mothers’ stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective “stressors” and subjective “distress” and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Method Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Results Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2Δ = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (R2Δ = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. Conclusions The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight. PMID:26544958

  7. Expectation versus Reality: The Impact of Utility on Emotional Outcomes after Returning Individualized Genetic Research Results in Pediatric Rare Disease Research, a Qualitative Interview Study

    PubMed Central

    Cacioppo, Cara N.; Chandler, Ariel E.; Towne, Meghan C.; Beggs, Alan H.; Holm, Ingrid A.

    2016-01-01

    Purpose Much information on parental perspectives on the return of individual research results (IRR) in pediatric genomic research is based on hypothetical rather than actual IRR. Our aim was to understand how the expected utility to parents who received IRR on their child from a genetic research study compared to the actual utility of the IRR received. Methods We conducted individual telephone interviews with parents who received IRR on their child through participation in the Manton Center for Orphan Disease Research Gene Discovery Core (GDC) at Boston Children’s Hospital (BCH). Results Five themes emerged around the utility that parents expected and actually received from IRR: predictability, management, family planning, finding answers, and helping science and/or families. Parents expressing negative or mixed emotions after IRR return were those who did not receive the utility they expected from the IRR. Conversely, parents who expressed positive emotions were those who received as much or greater utility than expected. Conclusions Discrepancies between expected and actual utility of IRR affect the experiences of parents and families enrolled in genetic research studies. An informed consent process that fosters realistic expectations between researchers and participants may help to minimize any negative impact on parents and families. PMID:27082877

  8. Risk aversion and uncertainty in cost-effectiveness analysis: the expected-utility, moment-generating function approach.

    PubMed

    Elbasha, Elamin H

    2005-05-01

    The availability of patient-level data from clinical trials has spurred a lot of interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices from healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme, when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and the issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided.
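
The moment-generating-function device is easiest to see in the normal case: with u(x) = −exp(−r·x) and X ~ N(μ, σ²), E[u(X)] = −M_X(−r), and the certainty equivalent collapses to the mean-variance rule the abstract describes as a special case. A hedged numeric check with toy numbers of our own choosing:

```python
import numpy as np

def cara_ce_normal(mu, sigma, r):
    """Certainty equivalent of X ~ N(mu, sigma^2) under u(x) = -exp(-r*x).
    Since E[u(X)] = -M_X(-r) = -exp(-r*mu + (r*sigma)**2 / 2), the CE is
    mu - (r/2)*sigma**2: the mean-variance trade-off drops out exactly."""
    return mu - 0.5 * r * sigma ** 2

# Monte Carlo check of the MGF identity (illustrative numbers):
rng = np.random.default_rng(1)
r, mu, sigma = 0.3, 10.0, 2.0
x = rng.normal(mu, sigma, 200_000)
eu_mc = np.mean(-np.exp(-r * x))                    # sample estimate of E[u(X)]
eu_mgf = -np.exp(-r * mu + 0.5 * (r * sigma) ** 2)  # -M_X(-r), closed form
```

For non-normal cost and effect distributions the same recipe applies with the corresponding moment-generating function, which is why the choice of distribution matters, as the abstract notes.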

  9. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  10. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  11. Preparation of multiband structure with Cu2Se/Ga3Se2/In3Se2 thin films by thermal evaporation technique for maximal solar spectrum utilization

    NASA Astrophysics Data System (ADS)

    Mohan, A.; Rajesh, S.; Gopalakrishnan, M.

    2016-10-01

    The paper investigates and discusses the formation of a multiband structure from Cu2Se/Ga3Se2/In3Se2 thin films for maximal solar spectrum utilization. Different semiconductor materials with various band gaps were stacked by a successive evaporation method, with the layers arranged from low to high band gap starting at the substrate. The XRD results exhibit the formation of CIGS composites through this successive evaporation of Cu2Se/Ga3Se2/In3Se2 followed by thermal treatment. Scanning electron microscope images show improved crystallinity, with reduced large-grain-boundary scattering after annealing. Optical spectra show stronger absorption in the UV-visible region and higher transmission in the infrared and near-infrared regions. The optical band gap calculated for the as-prepared films is 2.20 eV, and the band gap splits into 1.62, 1.92 and 2.27 eV for the annealed samples. Such multiband structures are needed to utilize the full solar spectrum.

  12. Utilization of Molecular, Phenotypic, and Geographical Diversity to Develop Compact Composite Core Collection in the Oilseed Crop, Safflower (Carthamus tinctorius L.) through Maximization Strategy

    PubMed Central

    Kumar, Shivendra; Ambreen, Heena; Variath, Murali T.; Rao, Atmakuri R.; Agarwal, Manu; Kumar, Amar; Goel, Shailendra; Jagannath, Arun

    2016-01-01

    Safflower (Carthamus tinctorius L.) is a dryland oilseed crop yielding high quality edible oil. Previous studies have described significant phenotypic variability in the crop and used geographical distribution and phenotypic trait values to develop core collections. However, the molecular diversity component was lacking in the earlier collections thereby limiting their utility in breeding programs. The present study evaluated the phenotypic variability for 12 agronomically important traits during two growing seasons (2011–12 and 2012–13) in a global reference collection of 531 safflower accessions, assessed earlier by our group for genetic diversity and population structure using AFLP markers. Significant phenotypic variation was observed for all the agronomic traits in the representative collection. Cluster analysis of phenotypic data grouped the accessions into five major clusters. Accessions from the Indian Subcontinent and America harbored maximal phenotypic variability with unique characters for a few traits. MANOVA analysis indicated significant interaction between genotypes and environment for both the seasons. Initially, six independent core collections (CC1–CC6) were developed using molecular marker and phenotypic data for two seasons through POWERCORE and MSTRAT. These collections captured the entire range of trait variability but failed to include complete genetic diversity represented in 19 clusters reported earlier through Bayesian analysis of population structure (BAPS). Therefore, we merged the three POWERCORE core collections (CC1–CC3) to generate a composite core collection, CartC1 and three MSTRAT core collections (CC4–CC6) to generate another composite core collection, CartC2. The mean difference percentage, variance difference percentage, variable rate of coefficient of variance percentage, coincidence rate of range percentage, Shannon's diversity index, and Nei's gene diversity for CartC1 were 11.2, 43.7, 132.4, 93.4, 0.47, and 0
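
The diversity statistics named above (Shannon's diversity index and Nei's gene diversity) are straightforward to compute from allele or class frequencies. A hedged toy sketch of the two measures, not the POWERCORE/MSTRAT pipeline itself:

```python
import math
from collections import Counter

def shannon_index(alleles):
    """Shannon's diversity index H' = -sum(p_i * ln p_i) over observed
    allele/class frequencies; higher H' means a collection retains more
    of the variability of the full germplasm panel."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def nei_gene_diversity(alleles):
    """Nei's gene diversity (expected heterozygosity): 1 - sum(p_i^2)."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())
```

Comparing these indices between a candidate core collection and the full panel is one way to check, as the study does, that a compact core has not discarded genetic diversity.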

  13. Expected utility of voluntary vaccination in the middle of an emergent Bluetongue virus serotype 8 epidemic: a decision analysis parameterized for Dutch circumstances.

    PubMed

    Sok, J; Hogeveen, H; Elbers, A R W; Velthuis, A G J; Oude Lansink, A G J M

    2014-08-01

    In order to put a halt to the Bluetongue virus serotype 8 (BTV-8) epidemic in 2008, the European Commission promoted vaccination at a transnational level as a new measure to combat BTV-8. Most European member states opted for a mandatory vaccination campaign, whereas the Netherlands, amongst others, opted for a voluntary campaign. For the latter to be effective, the farmer's willingness to vaccinate should be high enough to reach satisfactory vaccination coverage to stop the spread of the disease. This study looked at a farmer's expected utility of vaccination, which is expected to have a positive impact on the willingness to vaccinate. Decision analysis was used to structure the vaccination decision problem into decisions, events and payoffs, and to define the relationships among these elements. Two scenarios were formulated to distinguish farmers' mindsets, based on differences in dairy heifer management. For each of the scenarios, a decision tree was run for two years to study vaccination behaviour over time. The analysis was based on the expected utility criterion, which makes it possible to account for the effect of a farmer's risk preference on the vaccination decision. Probabilities were estimated by experts; payoffs were based on an earlier published study. According to the results of the simulation, the farmer decided initially to vaccinate against BTV-8 as the net expected utility of vaccination was positive. Re-vaccination was uncertain due to lower expected costs of a continued outbreak. A risk-averse farmer in this respect is more likely to re-vaccinate. When heifers were retained for export on the farm, the net expected utility of vaccination was found to be generally larger and thus re-vaccination was more likely to happen. For future animal health programmes that rely on a voluntary approach, results show that the provision of financial incentives can be adjusted to the farmers' willingness to vaccinate over time. 
Important in this respect are the decision
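
The vaccinate-or-not decision tree evaluated under the expected utility criterion can be sketched in miniature; the tree shape, the CARA utility form, and all numbers below are illustrative assumptions, not the parameterized Dutch values used in the study.

```python
import math

def eu(payoffs, probs, r=0.0):
    """Expected utility at a chance node; CARA utility with r > 0 encoding
    risk aversion, r = 0 risk neutrality."""
    u = (lambda x: x) if r == 0 else (lambda x: (1.0 - math.exp(-r * x)) / r)
    return sum(p * u(x) for p, x in zip(probs, payoffs))

def net_eu_of_vaccination(p_outbreak, loss, cost, efficacy=0.9, r=0.0):
    """Two-branch decision tree: vaccinate (pay `cost`, suffer `loss` only
    if an outbreak arrives AND the vaccine fails) versus do nothing
    (suffer `loss` whenever an outbreak arrives).
    Positive result => vaccination has the higher expected utility."""
    p_fail = p_outbreak * (1.0 - efficacy)
    eu_vacc = eu([-cost - loss, -cost], [p_fail, 1.0 - p_fail], r)
    eu_none = eu([-loss, 0.0], [p_outbreak, 1.0 - p_outbreak], r)
    return eu_vacc - eu_none
```

With illustrative numbers (20% outbreak risk, loss 10 000, vaccination cost 500), vaccination has positive net expected utility, and increasing risk aversion (r > 0) increases it further, mirroring the abstract's finding that risk-averse farmers are more likely to (re-)vaccinate.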

  14. Managing Expectations: Results from Case Studies of US Water Utilities on Preparing for, Coping with, and Adapting to Extreme Events

    NASA Astrophysics Data System (ADS)

    Beller-Simms, N.; Metchis, K.

    2014-12-01

    Water utilities, reeling from the increased impacts of successive extreme events such as floods, droughts, and derechos, are taking a more proactive role in preparing for future incursions. A recent study by Federal and water foundation investigators reveals how six US water utilities and their regions prepared for, responded to, and coped with recent extreme weather and climate events, and the lessons they are using to plan future adaptation and resilience activities. Two case studies will be highlighted. (1) Sonoma County, CA, has had alternating floods and severe droughts. In 2009, this area, home to competing water users, namely agricultural crops, wineries, tourism, and fisheries, faced a three-year drought, accompanied at the end by intense frosts. Competing uses of water threatened the grape harvest, endangered the fish industry and resulted in a series of regulations and court cases. Five years later, new efforts by partners in the entire watershed have identified mutual opportunities for increased basin sustainability in the face of a changing climate. (2) Washington DC had a derecho in late June 2012, which curtailed water, communications, and power delivery during a record heat spell that impacted hundreds of thousands of residents and lasted over the height of the tourist-intensive July 4th holiday. Lessons from this event were applied three months later in anticipation of the approaching Superstorm Sandy. This study will help other communities improve their resiliency in the face of future climate extremes. For example, the study revealed that (1) communities are planning for multiple types and occurrences of extreme events, which are becoming more severe and frequent and are impacting communities that are expanding into more vulnerable areas, and (2) decisions by one sector cannot be made in a vacuum and require the scientific, sectoral and citizen communities to work towards sustainable solutions.

  15. Prognostic utility of predischarge dipyridamole-thallium imaging compared to predischarge submaximal exercise electrocardiography and maximal exercise thallium imaging after uncomplicated acute myocardial infarction

    SciTech Connect

Gimple, L.W.; Hutter, A.M. Jr.; Guiney, T.E.; Boucher, C.A.

    1989-12-01

    The prognostic value of predischarge dipyridamole-thallium scanning after uncomplicated myocardial infarction was determined by comparison with submaximal exercise electrocardiography and 6-week maximal exercise thallium imaging and by correlation with clinical events. Two endpoints were defined: cardiac events and severe ischemic potential. Of the 40 patients studied, 8 had cardiac events within 6 months (1 died, 3 had myocardial infarction and 4 had unstable angina requiring hospitalization). The finding of any redistribution on dipyridamole-thallium scanning was common (77%) in these patients and had poor specificity (29%). Redistribution outside of the infarct zone, however, had equivalent sensitivity (63%) and better specificity (75%) for events (p less than 0.05). Both predischarge dipyridamole-thallium and submaximal exercise electrocardiography identified 5 of the 8 events (p = 0.04 and 0.07, respectively). The negative predictive accuracy for events for both dipyridamole-thallium and submaximal exercise electrocardiography was 88%. In addition to the 8 patients with events, 16 other patients had severe ischemic potential (6 had coronary bypass surgery, 1 had inoperable 3-vessel disease and 9 had markedly abnormal 6-week maximal exercise tests). Predischarge dipyridamole-thallium and submaximal exercise testing also identified 8 and 7 of these 16 patients with severe ischemic potential, respectively. Six of the 8 cardiac events occurred before 6-week follow-up. A maximal exercise thallium test at 6 weeks identified 1 of the 2 additional events within 6 months correctly. Thallium redistribution after dipyridamole in coronary territories outside the infarct zone is a sensitive and specific predictor of subsequent cardiac events and identifies patients with severe ischemic potential.
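The reported percentages pin down a plausible 2×2 table for redistribution outside the infarct zone versus 6-month cardiac events (8 events among 40 patients). A short sketch; the counts below are our reconstruction, chosen to be consistent with the abstract's figures, not taken from the paper itself:

```python
# Reconstructed 2x2 table for redistribution outside the infarct zone
# vs. 6-month cardiac events (8 events among 40 patients), consistent
# with the reported sensitivity (~63%), specificity (75%), and NPV (88%).
tp, fn = 5, 3    # patients with events: test positive / test negative
fp, tn = 8, 24   # patients without events: test positive / test negative

sensitivity = tp / (tp + fn)   # 5/8   = 0.625  (~63%)
specificity = tn / (tn + fp)   # 24/32 = 0.75   (75%)
npv = tn / (tn + fn)           # 24/27 ~ 0.889  (~88%)

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, NPV={npv:.3f}")
```

These counts also total 40 patients with 8 events, matching the cohort described above.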

  16. Prognostic utility of predischarge dipyridamole-thallium imaging compared to predischarge submaximal exercise electrocardiography and maximal exercise thallium imaging after uncomplicated acute myocardial infarction.

    PubMed

    Gimple, L W; Hutter, A M; Guiney, T E; Boucher, C A

    1989-12-01

    The prognostic value of predischarge dipyridamole-thallium scanning after uncomplicated myocardial infarction was determined by comparison with submaximal exercise electrocardiography and 6-week maximal exercise thallium imaging and by correlation with clinical events. Two endpoints were defined: cardiac events and severe ischemic potential. Of the 40 patients studied, 8 had cardiac events within 6 months (1 died, 3 had myocardial infarction and 4 had unstable angina requiring hospitalization). The finding of any redistribution on dipyridamole-thallium scanning was common (77%) in these patients and had poor specificity (29%). Redistribution outside of the infarct zone, however, had equivalent sensitivity (63%) and better specificity (75%) for events (p less than 0.05). Both predischarge dipyridamole-thallium and submaximal exercise electrocardiography identified 5 of the 8 events (p = 0.04 and 0.07, respectively). The negative predictive accuracy for events for both dipyridamole-thallium and submaximal exercise electrocardiography was 88%. In addition to the 8 patients with events, 16 other patients had severe ischemic potential (6 had coronary bypass surgery, 1 had inoperable 3-vessel disease and 9 had markedly abnormal 6-week maximal exercise tests). Predischarge dipyridamole-thallium and submaximal exercise testing also identified 8 and 7 of these 16 patients with severe ischemic potential, respectively. Six of the 8 cardiac events occurred before 6-week follow-up. A maximal exercise thallium test at 6 weeks identified 1 of the 2 additional events within 6 months correctly. Thallium redistribution after dipyridamole in coronary territories outside the infarct zone is a sensitive and specific predictor of subsequent cardiac events and identifies patients with severe ischemic potential.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Evidence for surprise minimization over value maximization in choice behavior.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
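The entropy-seeking prediction can be illustrated with a toy calculation (our construction, not the paper's active-inference model): two options with identical expected utility, where an agent that additionally rewards outcome entropy prefers the option that keeps more outcomes in play.

```python
import math

def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two hypothetical options with identical expected utility:
# A pays 1 for sure; B pays 0 or 2 with equal probability.
opt_a = ([1.0], [1.0])
opt_b = ([0.5, 0.5], [0.0, 2.0])

eu_a, eu_b = expected_utility(*opt_a), expected_utility(*opt_b)

# A surprise-minimizing agent scores each option by expected utility
# plus the entropy of its outcome distribution ("keeping options open").
score_a = eu_a + entropy(opt_a[0])
score_b = eu_b + entropy(opt_b[0])

print(eu_a == eu_b)        # tied under pure utility maximization
print(score_b > score_a)   # B preferred once entropy is rewarded
```

A pure utility maximizer is indifferent between A and B; the entropy bonus breaks the tie toward B, which is the qualitative signature the experiment tests.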

  18. Evidence for surprise minimization over value maximization in choice behavior.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-11-13

Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations.

  19. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686

  20. On deciding to have a lobotomy: either lobotomies were justified or decisions under risk should not always seek to maximise expected utility.

    PubMed

    Cooper, Rachel

    2014-02-01

In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk be made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced; or we can conclude that the use of formal decision procedures, such as MEU, is problematic. PMID:24449251
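The MEU procedure the paper examines can be made concrete with a small sketch. The probabilities and utilities below are purely hypothetical placeholders, not Cooper's historical estimates:

```python
# Illustrative MEU calculation with hypothetical numbers: choosing whether
# to operate, given stipulated outcome probabilities and utilities in [0, 1].
outcomes = {
    "operate":    [(0.4, 0.9),   # improved
                   (0.4, 0.3),   # unchanged but impaired
                   (0.2, 0.0)],  # severe harm or death
    "no_operate": [(1.0, 0.25)], # chronic severe illness, untreated
}

def expected_utility(lottery):
    """Sum of probability-weighted utilities for one action's outcomes."""
    return sum(p * u for p, u in lottery)

best = max(outcomes, key=lambda a: expected_utility(outcomes[a]))
print(best)  # "operate": 0.48 beats 0.25
```

With these stipulated numbers the risky operation has the higher expected utility (0.48 vs. 0.25), which is the structure of the paper's point: MEU can endorse a dangerous intervention whenever the status quo is valued badly enough.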

  1. DEVELOPMENT OF A VALIDATED MODEL FOR USE IN MINIMIZING NOx EMISSIONS AND MAXIMIZING CARBON UTILIZATION WHEN CO-FIRING BIOMASS WITH COAL

    SciTech Connect

    Larry G. Felix; P. Vann Bush

    2003-01-29

    This is the ninth Quarterly Technical Report for DOE Cooperative Agreement No. DE-FC26-00NT40895. A statement of the project objectives is included in the Introduction of this report. The pilot-scale testing phase of the project has been completed. Calculations are essentially completed for implementing a modeling approach to combine reaction times and temperature distributions from computational fluid dynamic models of the pilot-scale combustion furnace with char burnout and chemical reaction kinetics to predict NO{sub x} emissions and unburned carbon levels in the furnace exhaust. The REI Configurable Fireside Simulator (CFS) has proven to be an essential component to provide input for these calculations. Niksa Energy Associates expects to deliver their final report in February 2003. Work has continued on the project final report.

  2. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision
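The two information types can be restated in code. The gamble below is our own illustrative example; following the metric described above, "maximizing information" is the expected-value ratio of the two options and "satisficing information" is the win probability alone:

```python
# Hypothetical choice between a certain payment and a gamble, illustrating
# the two information types from the study (numbers are ours, not the study's).
certain_amount = 40.0
gamble_win, p_win = 100.0, 0.5

expected_value = p_win * gamble_win
maximizing_info = expected_value / certain_amount  # EV ratio of the options
satisficing_info = p_win                           # probability of winning alone

# An EV-maximizer gambles when the ratio exceeds 1; a satisficer might
# instead apply a simple threshold on the win probability.
print(maximizing_info > 1.0, satisficing_info)
```

The study's finding is that older adults lean less on the ratio (the effortful, maximizing quantity) and relatively more on the win probability.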

  3. DEVELOPMENT OF A VALIDATED MODEL FOR USE IN MINIMIZING NOx EMISSIONS AND MAXIMIZING CARBON UTILIZATION WHEN CO-FIRING BIOMASS WITH COAL

    SciTech Connect

    Larry G. Felix; P. Vann Bush; Stephen Niksa

    2003-04-30

In full-scale boilers, the effect of biomass cofiring on NO{sub x} and unburned carbon (UBC) emissions has been found to be site-specific. Few sets of field data are comparable, and no consistent database of information exists upon which cofiring fuel choice or injection system design can be based to assure that NO{sub x} emissions will be minimized and UBC reduced. This report presents the results of a comprehensive project that generated an extensive set of pilot-scale test data that were used to validate a new predictive model for the cofiring of biomass and coal. All testing was performed at the 3.6 MMBtu/hr (1.75 MW{sub t}) Southern Company Services/Southern Research Institute Combustion Research Facility, where a variety of burner configurations, coals, biomasses, and biomass injection schemes were utilized to generate a database of consistent, scalable, experimental results (422 separate test conditions). This database was then used to validate a new model for predicting NO{sub x} and UBC emissions from the cofiring of biomass and coal. This model is based on an Advanced Post-Processing (APP) technique that generates an equivalent network of idealized reactor elements from a conventional CFD simulation. The APP reactor network is a computational environment that allows for the incorporation of all relevant chemical reaction mechanisms and provides a new tool to quantify NO{sub x} and UBC emissions for any cofired combination of coal and biomass.

  4. Maximizing the utilization of Laminaria japonica as biomass via improvement of alginate lyase activity in a two-phase fermentation system.

    PubMed

    Oh, Yuri; Xu, Xu; Kim, Ji Young; Park, Jong Moon

    2015-08-01

Brown seaweed contains up to 67% carbohydrates by dry weight and presents high potential as a polysaccharide feedstock for biofuel production. To effectively use brown seaweed as a biomass, degradation of alginate is the major challenge due to its complicated structure and low solubility in water. This study focuses on the isolation of alginate-degrading bacteria, determining the optimum fermentation conditions, and comparing the conventional single fermentation system with a two-phase fermentation system that separately ferments alginate and mannitol extracted from Laminaria japonica. The maximum yield of organic acid production and the volatile solids (VS) reduction obtained were 0.516 g/g and 79.7%, respectively, using the two-phase fermentation system, in which alginate fermentation was carried out at pH 7 and mannitol fermentation at pH 8. The two-phase fermentation system increased the yield of organic acid production by 1.14 times and led to a 1.45-times greater reduction of VS when compared to the conventional single fermentation system at pH 8. The results show that the two-phase fermentation system improved the utilization of alginate by separating alginate from mannitol, leading to enhanced alginate lyase activity.

  5. DEVELOPMENT OF A VALIDATED MODEL FOR USE IN MINIMIZING NOx EMISSIONS AND MAXIMIZING CARBON UTILIZATION WHEN CO-FIRING BIOMASS WITH COAL

    SciTech Connect

    Larry G. Felix; P. Vann Bush

    2002-04-30

This is the sixth Quarterly Technical Report for DOE Cooperative Agreement No. DE-FC26-00NT40895. A statement of the project objectives is included in the Introduction of this report. Two additional biomass co-firing test burns were conducted during this quarter. In the first test (Test 10), up to 20% by weight dry hardwood sawdust and switchgrass was co-fired with Galatia coal and injected through the dual-register burner. Galatia coal is a medium-sulfur Illinois Basin coal ({approx}1.0% S). The dual-register burner is a generic low-NO{sub x} burner that incorporates two independent wind boxes. In the second test (Test 11), regular ({approx}70% passing 200 mesh) and finely ground ({approx}90% passing 200 mesh) Pratt Seam coal was injected through the single-register burner to determine if coal grind affects NO{sub x} and unburned carbon emissions. The results of these tests are presented in this quarterly report. Significant progress has been made in implementing a modeling approach to combine reaction times and temperature distributions from computational fluid dynamic models of the pilot-scale combustion furnace with char burnout and chemical reaction kinetics to predict NO{sub x} emissions and unburned carbon levels in the furnace exhaust. No additional results of CFD modeling have been received, as delivery of the Configurable Fireside Simulator is expected during the next quarter. Preparations are under way for continued pilot-scale combustion experiments with the single-register burner and a low-volatility bituminous coal. Some delays have been experienced in the acquisition and processing of biomass. Finally, a project review was held at the offices of Southern Research in Birmingham on February 27, 2002.

  6. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  7. Great Expectations.

    ERIC Educational Resources Information Center

    Sullivan, Patricia

    1999-01-01

    Parents must learn to transmit a sense of high expectations to their children (related to behavior and accomplishments) without crushing them with too much pressure. This means setting realistic expectations based on their children's special abilities, listening to their children's feelings about the expectations, and understanding what…

  8. A Column Generation Approach to Solve Multi-Team Influence Maximization Problem for Social Lottery Design

    NASA Astrophysics Data System (ADS)

    Jois, Manjunath Holaykoppa Nanjunda

The conventional Influence Maximization problem is the problem of finding a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy-conservation behavior as a stochastic Influence Maximization problem with constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. This variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all possible teams is a hard task, considering that the feasible team count grows exponentially with the network size. To address this challenge, we develop a column-generation-based approach that solves the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only those teams that can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and we perform a computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve otherwise hard-to-solve influence maximization problems.
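The expected-spread objective at the heart of the master problem can be estimated by Monte Carlo simulation. The sketch below uses the independent-cascade diffusion model, a common choice in influence maximization; the diffusion model and fairness constraints of the work described above are not reproduced here:

```python
import random

def simulate_ic(adj, seeds, p=0.1, trials=2000, rng=random.Random(7)):
    """Monte Carlo estimate of expected spread under an independent-cascade
    model: each newly activated node tries once to activate each neighbor."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Toy directed network; in a column-generation scheme, the estimated spread
# of each candidate team would feed the master LP's objective coefficients.
adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
spread = simulate_ic(adj, seeds=[0])
print(spread)  # at least 1.0 (the seed itself); grows with p
```

Estimating spread this way is what makes the problem stochastic: each candidate team's objective coefficient is itself an expectation.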

  9. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise-both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  10. Exceeding Expectations

    ERIC Educational Resources Information Center

    Cannon, John

    2011-01-01

Awareness of expectations is so important in the facilities business. The author's experience has taught him that it is essential to understand how expectations impact people's lives, as well as the lives of those for whom they provide services every day. This article presents examples and ideas that will provide insight to help educators…

  11. Maximally Nonlocal Theories Cannot Be Maximally Random

    NASA Astrophysics Data System (ADS)

    de la Torre, Gonzalo; Hoban, Matty J.; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio

    2015-04-01

Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also put fundamental restrictions on device-independent protocols based on the no-signaling principle.
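The gap between quantum and maximally nonlocal (no-signaling) correlations can be seen in the CHSH setting. A minimal sketch, assuming the standard CHSH combination of four two-outcome correlators:

```python
import math

def chsh(E):
    """CHSH value from the four correlators E[(x, y)] = <a_x b_y>."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# PR box: the maximally nonlocal no-signaling correlations.
pr_box = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): -1.0}

# Optimal quantum strategy: correlators of magnitude 1/sqrt(2)
# saturate Tsirelson's bound.
s = 1 / math.sqrt(2)
quantum = {(0, 0): s, (0, 1): s, (1, 0): s, (1, 1): -s}

print(chsh(pr_box))   # 4, the algebraic maximum
print(chsh(quantum))  # 2*sqrt(2) ~ 2.828, Tsirelson's bound
```

The PR box reaches the algebraic maximum of 4, yet (per the result above) such maximally nonlocal correlations cannot certify maximal randomness, while quantum correlations, capped at 2√2, can.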

  12. Maximally nonlocal theories cannot be maximally random.

    PubMed

    de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio

    2015-04-24

    Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to study how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what do the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on and show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039

  13. Maximal combustion temperature estimation

    NASA Astrophysics Data System (ADS)

    Golodova, E.; Shchepakina, E.

    2006-12-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.

  14. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
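The underweighting of rare events in decisions from experience has a simple statistical face (our illustration, not the authors' analysis): when people sample an option only a few times, a rare event is frequently never encountered at all.

```python
# Probability that a small experience sample contains no rare event:
# if the event is never observed, it is effectively weighted at zero.
p_rare, sample_size = 0.05, 10

p_never_seen = (1 - p_rare) ** sample_size
print(round(p_never_seen, 3))  # ~0.599: most small samples miss the rare event
```

With a 5% event and ten observations, about 60% of decision-makers never see the event at all, which is one mechanism behind experience-driven underweighting of rare outcomes.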

  15. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.

  16. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  17. Inclusive fitness maximization: An axiomatic approach.

    PubMed

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.

  18. Great Expectations for "Great Expectations."

    ERIC Educational Resources Information Center

    Ridley, Cheryl

    Designed to make the study of Dickens' "Great Expectations" an appealing and worthwhile experience, this paper presents a unit of study intended to help students gain (1) an appreciation of Dickens' skill at creating realistic human characters; (2) an insight into the problems of a young man confused by false values and unreal ambitions and ways to…

  19. Generation and Transmission Maximization Model

    2001-04-05

    GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system taking into account not only a single system's limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at a minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detailed operational limitations such as power plant ramp rates and hydropower reservoir constraints.

  20. Maximal Outboxes of Quadrilaterals

    ERIC Educational Resources Information Center

    Zhao, Dongsheng

    2011-01-01

    An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…

  1. Infrared Maximally Abelian Gauge

    SciTech Connect

    Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio

    2007-02-27

    The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.

  2. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. This optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).

  3. MAXIM: The Blackhole Imager

    NASA Technical Reports Server (NTRS)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.

  4. Maximal Transcendentality and Integrability

    NASA Astrophysics Data System (ADS)

    Lipatov, L. N.

    2008-09-01

    The Hamiltonian describing possible interactions of the Reggeized gluons in the leading logarithmic approximation (LLA) of multicolor QCD has the properties of conformal invariance, holomorphic separability and duality. It coincides with the Hamiltonian of the integrable Heisenberg model with the spins being the Möbius group generators. With the use of the Baxter-Sklyanin representation we calculate intercepts of the colorless states constructed from three and four Reggeized gluons and anomalous dimensions of the corresponding high twist operators. The integrability properties of the BFKL equation at a finite temperature are reviewed. Maximal transcendentality is used to construct anomalous dimensions of twist-2 operators up to 4 loops. It is shown that the asymptotic Bethe Ansatz in the 4-loop approximation is not in agreement with predictions of the BFKL equation in N=4 SUSY.

  5. Violations of transitivity under fitness maximization.

    PubMed

    Houston, Alasdair I; McNamara, John M; Steer, Mark D

    2007-08-22

    We present a novel demonstration that violations of transitive choice can result from decision strategies that maximize fitness. Our results depend on how the available options, including options not currently chosen, influence a decision-maker's expectations about the future. In particular, they depend on how the presence of an option may act as an insurance against a run of bad luck in the future.

  6. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
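    For the simple binary-detection special case that the framework above generalizes, the expected-utility-maximizing rule is a likelihood-ratio test whose threshold is fixed by prevalence and the four decision utilities. The sketch below is illustrative only; the Gaussian class-conditional densities, prevalence, and utility values are invented, not taken from the paper.

```python
import math

def gauss(x, mu, sigma=1.0):
    """Gaussian density, used here as a toy class-conditional model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Utilities for (decision, truth): true positive, false positive,
# true negative, false negative -- all invented for illustration.
u_tp, u_fp, u_tn, u_fn = 1.0, -0.5, 0.0, -1.0
prevalence = 0.3            # P(signal present)
mu0, mu1 = 0.0, 2.0         # signal-absent / signal-present means

# Expected utility is maximized by deciding "signal present" when
# LR(x) > (1 - prev) * (u_tn - u_fp) / (prev * (u_tp - u_fn)).
threshold = ((1 - prevalence) * (u_tn - u_fp)) / (prevalence * (u_tp - u_fn))

def decide(x):
    """Likelihood-ratio decision rule for one observation x."""
    lr = gauss(x, mu1) / gauss(x, mu0)
    return lr > threshold
```

A high observation near the signal mean (e.g. x = 2.0) is classified as signal-present, while one near the noise mean (x = 0.0) is not.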

  7. Smoking Outcome Expectancies among College Students.

    ERIC Educational Resources Information Center

    Brandon, Thomas H.; Baker, Timothy B.

    Alcohol expectancies have been found to predict later onset of drinking among adolescents. This study examined whether the relationship between level of alcohol use and expectancies is paralleled with cigarette smoking, and attempted to identify the content of smoking expectancies. An instrument to measure the subjective expected utility of…

  8. Glutathione is required for maximal transcription of the cobalamin biosynthetic and 1,2-propanediol utilization (cob/pdu) regulon and for the catabolism of ethanolamine, 1,2-propanediol, and propionate in Salmonella typhimurium LT2.

    PubMed Central

    Rondon, M R; Kazmierczak, R; Escalante-Semerena, J C

    1995-01-01

    Transcription of the cob/pdu regulon of Salmonella typhimurium is activated by the PocR regulatory protein in response to 1,2-propanediol (1,2-PDL) in the environment. Nutritional analysis and DNA sequencing confirmed that a strain defective in expression of the cob/pdu regulon in response to 1,2-PDL lacked a functional gshA gene. gshA encodes gamma-glutamylcysteine synthetase (L-glutamate:L-cysteine gamma-ligase [ADP forming]; EC 6.3.2.2), the enzyme that catalyzes the first step in the synthesis of glutathione (GSH). The DNA sequence of gshA was partially determined, and the location of gshA in the chromosome was established by two-factor crosses. P22 cotransduction of gshA with nearby markers showed 21% linkage to srl and 1% linkage to hyd; srl was 9% cotransducible with hyd. In light of these data, the gene order gshA srl hyd is suggested. The level of reduced thiols in the gshA strain was 87% lower than the levels measured in the wild-type strain in both aerobically and anaerobically grown cells. 1,2-PDL-dependent transcription of cob/pdu was studied by using M. Casadaban's Mu-lacZ fusions. In aerobically grown cells, transcription of a cbi-lacZ fusion (the cbi genes are the subset of cob genes that encode functions needed for the synthesis of the corrin ring) was 4-fold lower and transcription of a pdu-lacZ fusion was 10-fold lower in a gshA mutant than in the wild-type strain. Expression of the cob/pdu regulon in response to 1,2-PDL was restored when GSH was included in the medium. In anaerobically grown cells, cbi-lacZ transcription was only 0.4-fold lower than in the gshA+ strain; pdu-lacZ transcription was reduced only by 0.34-fold, despite the lower thiol levels in the mutant. cobA-lacZ transcription was used as a negative control for a gene whose transcription is not controlled by the PocR/1,2-PDL system; under both conditions, cobA transcription remained unaffected. The gshA mutant strain was unable to utilize 1,2-PDL, ethanolamine, or propionate as a

  9. Maximized Posttest Contrasts: A Clarification.

    ERIC Educational Resources Information Center

    Hollingsworth, Holly

    1980-01-01

    A solution to some problems of maximized contrasts for analysis of variance situations when the cell sizes are unequal is offered. It is demonstrated that a contrast is maximized relative to the analysis used to compute the sum of squares between groups. Interpreting a maximum contrast is discussed. (Author/GK)

  10. [Manufactured baby food: safety expectations].

    PubMed

    Davin, L; Van Egroo, L-D; Galesne, N

    2010-12-01

    Food safety is a concern for parents of infants, and healthcare professionals are often questioned by them about this topic. European regulation of baby food ensures high levels of safety and is more rigorous than regulation of ordinary food. The maximal limit for pesticides in baby food illustrates these strict requirements: the limit must be below the 10 ppb detection threshold, whatever the chemical used. Other contaminants, such as nitrates, are also subject to stricter limits in baby food. Controlling food safety risks requires specific know-how that baby food manufacturers have acquired and refined, particularly by working with producers of high-quality raw materials.

  11. Measuring Generalized Expectancies for Negative Mood Regulation.

    ERIC Educational Resources Information Center

    Catanzaro, Salvatore J.; Mearns, Jack

    Research has suggested the utility of studying individual differences in the regulation of negative mood states. Generalized response expectancies for negative mood regulation were defined as expectancies that some overt behavior or cognition would alleviate negative mood states as they occur across situations. The Generalized Expectancy for…

  12. Maximize x(a - x)

    ERIC Educational Resources Information Center

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
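    All of the article's methods arrive at the same maximizing condition, x = a/2 (by the AM-GM argument: for a fixed sum a, the product of x and a - x is largest when the two factors are equal), with maximum value a²/4. A quick numeric check, not part of the original article:

```python
# Brute-force check that f(x) = x * (a - x) on [0, a] peaks at x = a/2
# with value a**2 / 4, consistent with the calculus-free proofs.

def f(x, a):
    return x * (a - x)

def argmax_on_grid(a, n=100001):
    """Maximize f over a fine grid on [0, a]; returns (argmax, max value)."""
    best_x, best_val = 0.0, f(0.0, a)
    for i in range(1, n):
        x = a * i / (n - 1)
        val = f(x, a)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_star, v_star = argmax_on_grid(10.0)   # expect x_star = 5.0, v_star = 25.0
```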

  13. Power Converters Maximize Outputs Of Solar Cell Strings

    NASA Technical Reports Server (NTRS)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters are devised to maximize the power transferred from solar photovoltaic strings to storage batteries and other electrical loads. The converters help utilize large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are that a single microprocessor controller independently controls and optimizes any number of "dumb" tracker units and strings, and that the power output of the converters is maximized.
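    The abstract does not specify the control algorithm; perturb-and-observe hill climbing is one common way such a microprocessor controller can track a string's maximum power point. The PV curve below is a toy model, purely illustrative, not taken from the report:

```python
# Perturb-and-observe (P&O) sketch: step the operating voltage, observe power,
# and reverse direction whenever power drops. The PV model is invented.

def pv_power(v):
    """Toy PV string power curve: zero outside [0, V_OC], single interior peak."""
    i_sc, v_oc = 8.0, 40.0          # assumed short-circuit current, open-circuit voltage
    if v < 0 or v > v_oc:
        return 0.0
    current = i_sc * (1 - (v / v_oc) ** 8)   # crude I-V shape
    return v * current

def perturb_and_observe(v0=10.0, step=0.1, iters=2000):
    """Hill-climb the operating voltage toward the maximum power point."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:               # power dropped: reverse perturbation direction
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

After convergence the voltage oscillates within one step of the maximum power point, which is the usual behavior of fixed-step P&O trackers.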

  14. All maximally entangling unitary operators

    SciTech Connect

    Cohen, Scott M.

    2011-11-15

    We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.

  15. On the maximal diphoton width

    NASA Astrophysics Data System (ADS)

    Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo

    2016-03-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  16. Expectation and conditioning

    NASA Astrophysics Data System (ADS)

    Coster, Adelle C. F.; Alstrøm, Preben

    2001-02-01

    We present a dynamical model that embodies both classical and instrumental conditioning paradigms in the same framework. The model is based on the formation of expectations of stimuli and of rewards. The expectations of stimuli are formed in a recurrent process called expectation learning in which one activity pattern evokes another. The expectation of rewards or punishments (motivation) is modelled using reinforcement learning.

  17. Maximizing TDRS Command Load Lifetime

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2002-01-01

    was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion, and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
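    The least-squares step described above can be sketched as simple linear regression of inferred mean anomaly against time, whose slope is the mean-motion estimate. The actual tool is an Excel spreadsheet; the data below are synthetic, and the geosynchronous-like value of n0 is an assumption, not taken from the report:

```python
# Estimate mean motion as the slope of a least-squares line fit to
# inferred mean anomaly versus time (synthetic data, illustrative only).

def least_squares_slope(t, y):
    """Closed-form simple linear regression slope."""
    n = len(t)
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    num = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
    den = sum((ti - t_bar) ** 2 for ti in t)
    return num / den

n0 = 1.00273                                 # assumed mean motion, revs/day
times = [0.5 * k for k in range(20)]          # observation epochs, days
# Inferred mean anomaly grows linearly, plus small alternating "noise".
anomaly = [n0 * t + 0.0003 * ((-1) ** k) for k, t in enumerate(times)]

n_est = least_squares_slope(times, anomaly)   # recovered mean-motion estimate
```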

  18. Utilizing Partnerships to Maximize Resources in College Counseling Services

    ERIC Educational Resources Information Center

    Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas

    2015-01-01

    Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…

  19. Algebraic curves of maximal cyclicity

    NASA Astrophysics Data System (ADS)

    Caubergh, Magdalena; Dumortier, Freddy

    2006-01-01

    The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. One then speaks of a maximal cyclicity curve (mcc) when only the number is considered, and of a maximal multiplicity curve (mmc) when the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case, by well-chosen examples we show that prudence is appropriate.

  20. Changing expectancies: cognitive mechanisms and context effects.

    PubMed

    Wiers, Reinout W; Wood, Mark D; Darkes, Jack; Corbin, William R; Jones, Barry T; Sher, Kenneth J

    2003-02-01

    This article presents the proceedings of a symposium at the 2002 RSA Meeting in San Francisco, organized by Reinout W. Wiers and Mark D. Wood. The symposium combined two topics of recent interest in studies of alcohol expectancies: cognitive mechanisms in expectancy challenge studies, and context-related changes of expectancies. With increasing recognition of the substantial role played by alcohol expectancies in drinking, investigators have begun to develop and evaluate expectancy challenge procedures as a potentially promising new prevention strategy. The two major issues addressed in the symposium were whether expectancy challenges result in changes in expectancies that mediate intervention (outcome relations), and the influence of simulated bar environments ("bar labs," in which challenges are usually done) on expectancies. The presentations were (1) An introduction, by Jack Darkes; (2) Investigating the utility of alcohol expectancy challenge with heavy drinking college students, by Mark D. Wood; (3) Effects of an expectancy challenge on implicit and explicit expectancies and drinking, by Reinout W. Wiers; (4) Effects of graphic feedback and simulated bar assessments on alcohol expectancies and consumption, by William R. Corbin; (5) Implicit alcohol associations and context, by Barry T Jones; and (6) A discussion by Kenneth J. Sher, who pointed out that it is important not only to study changes of expectancies in the paradigm of an expectancy challenge but also to consider the role of changing expectancies in natural development and in treatments not explicitly aimed at changing expectancies.

  1. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.

    PubMed

    Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B

    2016-08-01

    We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. PMID:27388875
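    The inference pattern described above can be illustrated with a minimal sketch (the scenario and all numbers are invented, not from the paper): an agent chooses the action maximizing reward minus cost, and an observer keeps only the reward assignments consistent with the observed choice.

```python
# Naive-utility-calculus toy: forward model (choice from utilities) and
# inverse inference (which rewards explain the observed choice).

def utility(reward, cost):
    return reward - cost

costs = {"apple_nearby": 1.0, "orange_far": 3.0}   # hypothetical action costs

def chosen(rewards):
    """Agent's choice: the action with maximal reward - cost."""
    return max(costs, key=lambda a: utility(rewards[a], costs[a]))

# Observer inference: which candidate reward assignments are consistent
# with seeing the agent walk to the far orange?
candidates = [
    {"apple_nearby": 2.0, "orange_far": 2.5},   # far option not worth the cost
    {"apple_nearby": 2.0, "orange_far": 6.0},   # far option worth the cost
]
consistent = [r for r in candidates if chosen(r) == "orange_far"]
```

Only the second assignment survives: choosing the costlier action reveals that its reward must exceed the cheaper option's reward by at least the cost difference.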

  2. Do Speakers and Listeners Observe the Gricean Maxim of Quantity?

    ERIC Educational Resources Information Center

    Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda

    2006-01-01

    The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…

  3. Client Expectations for Counseling

    ERIC Educational Resources Information Center

    Tinsley, Howard E. A.; Harris, Donna J.

    1976-01-01

    Undergraduate students (N=287) completed an 82-item questionnaire about their expectations of counseling. The respondents' strongest expectations were of seeing an experienced, genuine, expert, and accepting counselor they could trust. Expectancies that the counselor would be understanding and directive were lower. Significant sex differences were…

  4. Marijuana: College Students' Expectations.

    ERIC Educational Resources Information Center

    Rumstein, Regina

    College students' expectations regarding the physiological, psychological, and social effects of marijuana were investigated. A sample of 210 undergraduates stated their expectations about the effect of the drug by answering a series of structured-response type questions. Also, Ss provided background information related to their expectations about…

  5. The Power of Expectations

    ERIC Educational Resources Information Center

    Cross, Neal

    2008-01-01

    Principals want teachers to do more than profess high expectations for their students. Principals want teachers to have the knowledge and skills to realize their expectations for students by using strategies that increase students' attention to their achievement and responsibilities for learning. Current expectancy literature states that teachers…

  6. Expecting the Best

    ERIC Educational Resources Information Center

    DiPaula, John

    2010-01-01

    Educational expectations are psychological constructs that change over time and can be altered or influenced by various factors. The concept of educational expectations refers to how much schooling students realistically believe that they will complete. These expectations are eventually raised or lowered as students see others like themselves…

  7. Maximize Student Time on Task

    ERIC Educational Resources Information Center

    Peters, Erin

    2004-01-01

    Student time on task is the most influential factor in student achievement. High motivation and engagement in learning have consistently been linked to increased levels of student success. At the same time, a lack of interest in schoolwork becomes increasingly common in more and more middle school students. To maximize time on task, teachers need…

  8. Expect No Surprises.

    ERIC Educational Resources Information Center

    Shields, Jeffrey N.

    2002-01-01

    Mary Jo Maydew articulates her priorities as 2002-2003 board chair of the National Association of College and University Business Officers (NACUBO): helping business officers maximize their expanding roles on campus and guiding the association in better serving member needs. (EV)

  9. Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.

    ERIC Educational Resources Information Center

    Ravizza, Kenneth; Rotella, Robert

    Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…

  10. Using Debate to Maximize Learning Potential: A Case Study

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda

    2007-01-01

    Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…

  11. Elliptic functions and maximal unitarity

    NASA Astrophysics Data System (ADS)

    Søgaard, Mads; Zhang, Yang

    2015-04-01

    Scattering amplitudes at loop level can be reduced to a basis of linearly independent Feynman integrals. The integral coefficients are extracted from generalized unitarity cuts which define algebraic varieties. The topology of an algebraic variety characterizes the difficulty of applying maximal cuts. In this work, we analyze a novel class of integrals of which the maximal cuts give rise to an algebraic variety with irrational irreducible components. As a phenomenologically relevant example, we examine the two-loop planar double-box contribution with internal massive lines. We derive unique projectors for all four master integrals in terms of multivariate residues along with Weierstrass' elliptic functions. We also show how to generate the leading-topology part of otherwise infeasible integration-by-parts identities analytically from exact meromorphic differential forms.

  12. User Expectations: Nurses' Perspective.

    PubMed

    Gürsel, Güney

    2016-01-01

    Healthcare is a technology-intensive industry. Although all healthcare staff need qualified computer support, physicians and nurses need it most. Because nursing practice is information-intensive, understanding nurses' expectations of healthcare information systems (HCIS) is essential to meeting their needs and supporting them effectively. In this study, the perceived importance of nurses' expectations of HCIS is investigated, and two HCISs are evaluated against nurses' expectations using fuzzy logic methodologies. PMID:27332398

  13. The maximal exercise ECG in asymptomatic men.

    PubMed

    Cumming, G R; Borysyk, L; Dufresne, C

    1972-03-18

    Lead MC5 bipolar exercise ECG was obtained in 510 asymptomatic males, aged 40 to 65, utilizing the bicycle ergometer, with maximal stress in 71% of the subjects. "Ischemic changes" occurred in 61 subjects, the frequency increasing from 4% at age 40 to 45, to 20% at age 50 to 55, to 37% at age 61 to 65. Subjects having an ischemic type ECG change on exercise had more frequent minor resting ECG changes, more resting hypertension, and a greater incidence of high cholesterol values than subjects with a normal ECG response to exercise, but there was no difference in the incidence of obesity, low fitness, or high systolic blood pressure after exercise. Current evidence suggests that asymptomatic male subjects with an abnormal exercise ECG develop clinical coronary heart disease from 2.5 to over 30 times more frequently than those with a normal exercise ECG.

  14. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    PubMed

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the value of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds until it finally finds the optimal values.

  15. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds until it finally finds the optimal values. PMID:24904252
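    The paper's optimality criterion, choosing a separate threshold per condition so as to maximize average reward rate, can be illustrated with a toy grid search. The accuracy/time curves and payoff numbers below are invented; this is not the paper's SMDP learning algorithm:

```python
import itertools
import math

def accuracy(theta, drift):
    """Toy psychometric curve: higher threshold or drift -> more accurate."""
    return 1.0 / (1.0 + math.exp(-2.0 * drift * theta))

def mean_time(theta):
    """Toy chronometric curve: mean decision time grows with threshold."""
    return 0.3 + 0.8 * theta

conditions = [   # (probability of condition, drift / easiness)
    (0.5, 2.0),  # easy trials
    (0.5, 0.5),  # hard trials
]
reward, penalty, iti = 1.0, 0.0, 1.0   # payoffs and inter-trial interval

def reward_rate(thetas):
    """Expected reward per unit time, given one threshold per condition."""
    r = sum(p * (accuracy(th, d) * reward - (1 - accuracy(th, d)) * penalty)
            for (p, d), th in zip(conditions, thetas))
    t = sum(p * (mean_time(th) + iti) for (p, _), th in zip(conditions, thetas))
    return r / t

# Grid-search the pair of per-condition thresholds maximizing the rate.
grid = [0.1 * k for k in range(1, 31)]
best = max(itertools.product(grid, grid), key=reward_rate)
```

The grid search stands in for the learning process: the paper's model instead discovers these thresholds gradually via an SMDP-based learning rule.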

  16. Second use of transportation batteries: Maximizing the value of batteries for transportation and grid services

    SciTech Connect

    Viswanathan, Vilayanur V.; Kintner-Meyer, Michael CW

    2010-09-30

    Plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) are expected to gain significant market share over the next decade. The economic viability of such vehicles is contingent upon the availability of cost-effective batteries with high power and energy density. For initial commercial success, government subsidies will be highly instrumental in allowing PHEVs to gain a foothold. In the long term, however, for electric vehicles to be commercially viable, the economics have to be self-sustaining. Towards the end of battery life in the vehicle, the energy capacity left in the battery is not sufficient to provide the designed range for the vehicle. Typically, automotive manufacturers indicate the need for battery replacement when the remaining energy capacity reaches 70-80%. There is still sufficient power (kW) and energy capacity (kWh) left in the battery at that point to support various grid ancillary services, such as balancing, spinning reserve, and load-following. As renewable energy penetration increases, the need for such balancing services is expected to increase. This work explores the optimal point at which to retire transportation batteries so that they can subsequently be used for grid services. The analysis maximizes the value of an electric vehicle battery used first as a transportation battery (in its first life) and then as a resource for providing grid services (in its second life). The results are presented across a range of key parameters, such as depth of discharge (DOD), number of batteries used over the life of the vehicle, battery life in the vehicle, battery state of health (SOH) at end of life in the vehicle, and ancillary services rate. The results provide valuable insights for the automotive industry into maximizing the utility and value of vehicle batteries, in an effort to either reduce the selling price of EVs and PHEVs or maximize the profitability of the emerging electrification of transportation.
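
The two-life valuation can be sketched as a simple arithmetic model: vehicle use until the state of health (SOH) falls to a chosen replacement point, grid service after that. Every price, fade rate, and rating below is a hypothetical placeholder, not a figure from the study.

```python
def battery_value(replace_soh, capacity_kwh=30.0, vehicle_value_per_kwh_yr=120.0,
                  power_kw=60.0, grid_value_per_kw_yr=30.0,
                  fade_per_yr=0.025, end_soh=0.5):
    """Total value of one battery across two lives: vehicle use until SOH
    falls to replace_soh, then grid ancillary services until end_soh.
    Assumes linear capacity fade; all prices are made-up placeholders."""
    yrs_vehicle = (1.0 - replace_soh) / fade_per_yr
    yrs_grid = (replace_soh - end_soh) / fade_per_yr
    return (yrs_vehicle * capacity_kwh * vehicle_value_per_kwh_yr
            + yrs_grid * power_kw * grid_value_per_kw_yr)

# Compare retiring the battery from the vehicle at 80%, 75% and 70% SOH.
values = {soh: battery_value(soh) for soh in (0.80, 0.75, 0.70)}
```

Under these placeholder prices a vehicle-year is worth more than a grid-year, so later retirement wins; with a higher ancillary-services rate the comparison can flip, which is the trade-off the study quantifies across its parameter ranges.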

  17. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on a real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find feasible solutions. The simulation results present the trade-off among the three methods. The case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied in real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total number of connecting stops. Binary Semi-Definite Programming (BSDP) with a cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and a near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the…
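
Algebraic connectivity is the second-smallest eigenvalue λ2 of the graph Laplacian. A minimal pure-Python sketch (not any of the MGP/WTS/SDP methods above) computes λ2 by power iteration on c·I − L projected orthogonally to the all-ones eigenvector, and shows that adding one route (closing a 4-node path into a cycle) raises λ2:

```python
def fiedler_value(adj, iters=2000):
    """Algebraic connectivity: second-smallest Laplacian eigenvalue,
    found by power iteration on M = c*I - L restricted to the
    orthogonal complement of the all-ones eigenvector."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2.0 * max(deg) + 1.0                           # every M-eigenvalue > 0
    v = [((i * 9973) % 97) / 97.0 for i in range(n)]   # deterministic start

    def apply_M(u):
        # (M u)_i = (c - deg_i) u_i + sum_j adj_ij u_j
        return [(c - deg[i]) * u[i] + sum(adj[i][j] * u[j] for j in range(n))
                for i in range(n)]

    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]        # project out the 1-eigenvector
        w = apply_M(v)
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    mean = sum(v) / n
    v = [x - mean for x in v]
    Mv = apply_M(v)
    rayleigh = sum(a * b for a, b in zip(v, Mv)) / sum(x * x for x in v)
    return c - rayleigh                  # lambda_2 = c - (largest M-eig on 1-perp)

# 4-node path vs. 4-node cycle: one extra link closes the ring.
path  = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

λ2 of the 4-node path is 2 − √2 ≈ 0.586 while the 4-node cycle reaches 2, matching the known spectra and the intuition that extra links buy robustness.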

  18. An Unexpected Expected Value.

    ERIC Educational Resources Information Center

    Schwartzman, Steven

    1993-01-01

    Discusses the surprising result that the expected number of marbles of one color drawn from a set of marbles of two colors after two draws without replacement is the same as the expected number of that color marble after two draws with replacement. Presents mathematical models to help explain this phenomenon. (MDH)
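
The result follows from linearity of expectation: each draw is individually red with probability r/(r+b), with or without replacement, so two draws give 2r/(r+b) either way. A short exact-arithmetic enumeration confirms it:

```python
from fractions import Fraction

def expected_red(r, b, replace):
    """Expected number of red marbles seen in two draws from an urn
    holding r red and b blue marbles."""
    n = r + b
    e = Fraction(0)
    for first_red in (True, False):
        p1 = Fraction(r if first_red else b, n)
        if replace:
            n2, r2 = n, r                       # urn unchanged
        else:
            n2, r2 = n - 1, r - int(first_red)  # one marble removed
        for second_red in (True, False):
            p2 = Fraction(r2 if second_red else n2 - r2, n2)
            e += p1 * p2 * (int(first_red) + int(second_red))
    return e

e_with = expected_red(3, 7, replace=True)       # 3/5
e_without = expected_red(3, 7, replace=False)   # 3/5
```

The per-outcome probabilities differ between the two schemes, but the weighted counts sum to the same expectation.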

  19. Multivariate residues and maximal unitarity

    NASA Astrophysics Data System (ADS)

    Søgaard, Mads; Zhang, Yang

    2013-12-01

    We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.

  20. Knowledge discovery by accuracy maximization.

    PubMed

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-04-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.

  1. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
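
KODAMA itself is more elaborate, but its core loop — propose a change to the labelling, keep it only if cross-validated accuracy does not drop — can be sketched with leave-one-out 1-nearest-neighbour accuracy as the classifier. This is a simplified stand-in for the published procedure, and the data points are made up:

```python
import random

def loo_1nn_accuracy(X, labels):
    """Leave-one-out 1-nearest-neighbour accuracy of a labelling."""
    hits = 0
    for i, x in enumerate(X):
        nn = min((k for k in range(len(X)) if k != i),
                 key=lambda k: sum((a - b) ** 2 for a, b in zip(x, X[k])))
        hits += labels[i] == labels[nn]
    return hits / len(X)

def maximize_accuracy(X, n_labels=2, sweeps=20, seed=0):
    """Monte-Carlo relabelling: keep a single-point label change only if
    cross-validated accuracy does not fall below the best seen so far."""
    rng = random.Random(seed)
    labels = [rng.randrange(n_labels) for _ in X]
    best = loo_1nn_accuracy(X, labels)
    for _ in range(sweeps):
        for i in range(len(X)):
            old = labels[i]
            labels[i] = rng.randrange(n_labels)
            acc = loo_1nn_accuracy(X, labels)
            if acc >= best:
                best = acc
            else:
                labels[i] = old
    return labels, best

# Two well-separated blobs; the relabelling should recover them.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
labels, acc = maximize_accuracy(pts)
```

Because the accepted accuracy never decreases, the procedure climbs toward labellings whose structure the cross-validated classifier can predict, which is the sense in which accuracy maximization drives the feature extraction.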

  2. Health expectancy indicators.

    PubMed Central

    Robine, J. M.; Romieu, I.; Cambois, E.

    1999-01-01

    An outline is presented of progress in the development of health expectancy indicators, which are growing in importance as a means of assessing the health status of populations and determining public health priorities. PMID:10083720

  3. Maximal acceleration and radiative processes

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2015-08-01

    We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with those for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.

  4. Performance expectation plan

    SciTech Connect

    Ray, P.E.

    1998-09-04

    This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met expectations and made notable improvements in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be Excellent.

  5. Great Expectations. [Lesson Plan].

    ERIC Educational Resources Information Center

    Devine, Kelley

    Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy, and that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…

  6. Behavior, Expectations and Status

    ERIC Educational Resources Information Center

    Webster, Murray, Jr.; Rashotte, Lisa Slattery

    2010-01-01

    We predict effects of behavior patterns and status on performance expectations and group inequality using an integrated theory developed by Fisek, Berger and Norman (1991). We next test those predictions using new experimental techniques we developed to control behavior patterns as independent variables. In a 10-condition experiment, predictions…

  7. Maintaining High Expectations

    ERIC Educational Resources Information Center

    Williams, Roger; Williams, Sherry

    2014-01-01

    Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…

  8. Parenting with High Expectations

    ERIC Educational Resources Information Center

    Timperlake, Benna Hull; Sanders, Genelle Timperlake

    2014-01-01

    In some ways raising deaf or hard of hearing children is no different than raising hearing children; expectations must be established and periodically tweaked. Benna Hull Timperlake, who with husband Roger, raised two hearing children in addition to their deaf daughter, Genelle Timperlake Sanders, and Genelle, now a deaf professional, share their…

  9. Maximizing the optical network capacity.

    PubMed

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity.
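
The nonlinear (Kerr) limit can be illustrated with the cubic-interference form of the Gaussian-noise model, SNR(P) = P / (P_ASE + η·P³): the effective SNR peaks at a finite launch power P_opt = (P_ASE / 2η)^(1/3) rather than growing without bound. The coefficients below are placeholders, not measured fibre parameters.

```python
# Hypothetical coefficients (placeholders, not measured fibre values)
P_ASE = 1e-6   # accumulated amplifier (ASE) noise power, W
ETA = 1e3      # Kerr nonlinear-interference coefficient, 1/W^2

def snr(p):
    """Effective SNR: linear ASE noise plus cubic nonlinear interference."""
    return p / (P_ASE + ETA * p ** 3)

# Setting d(snr)/dp = 0 gives the optimal launch power in closed form.
p_opt = (P_ASE / (2.0 * ETA)) ** (1.0 / 3.0)

# A grid search over launch powers lands on the same optimum.
grid = [p_opt * (0.2 + 0.01 * i) for i in range(300)]
p_best = max(grid, key=snr)
```

Raising the launch power beyond P_opt makes the cubic interference term dominate, which is why the capacity-versus-power curve turns over in the nonlinear regime.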

  10. Maximizing the optical network capacity

    PubMed Central

    Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.

    2016-01-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572

  11. Maximal switchability of centralized networks

    NASA Astrophysics Data System (ADS)

    Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu

    2016-08-01

    We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.

  12. Maximizing the optical network capacity.

    PubMed

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572

  13. A Maximally Supersymmetric Kondo Model

    SciTech Connect

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC

    2012-02-17

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  14. Maximal Oxygen Intake and Maximal Work Performance of Active College Women.

    ERIC Educational Resources Information Center

    Higgs, Susanne L.

    Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…

  15. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  16. Multiqubit symmetric states with maximally mixed one-qubit reductions

    NASA Astrophysics Data System (ADS)

    Baguette, D.; Bastin, T.; Martin, J.

    2014-09-01

    We present a comprehensive study of maximally entangled symmetric states of arbitrary numbers of qubits in the sense of the maximal mixedness of the one-qubit reduced density operator. A general criterion is provided to easily identify whether given symmetric states are maximally entangled in that respect or not. We show that these maximally entangled symmetric (MES) states are the only symmetric states for which the expectation value of the associated collective spin of the system vanishes, as does, as a corollary, the dipole moment of the Husimi function. We establish the link between this kind of maximal entanglement, the anticoherence properties of spin states, and the degree of polarization of light fields. We analyze the relationship between the MES states and the classes of states equivalent through stochastic local operations with classical communication (SLOCC). We provide a nonexistence criterion of MES states within SLOCC classes of qubit states and show in particular that the symmetric Dicke state SLOCC classes never contain such MES states, with the only exception of the balanced Dicke state class for even numbers of qubits. The 4-qubit system is analyzed exhaustively and all MES states of this system are identified and characterized. Finally, the entanglement content of MES states is analyzed with respect to the geometric and barycentric measures of entanglement, as well as to the generalized N-tangle. We show that the geometric entanglement of MES states is ensured to be larger than or equal to 1/2, but also that MES states are not in general the symmetric states that maximize the investigated entanglement measures.
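
The defining property — a maximally mixed one-qubit reduced density operator — is easy to verify numerically. The sketch below (a standard textbook computation, not code from the paper) traces out all but the first qubit of the 3-qubit GHZ state, one such maximally entangled symmetric state; its reduction is I/2 and the single-qubit ⟨σz⟩ vanishes:

```python
import math

def one_qubit_reduction(amps, n):
    """Reduced density matrix of the first qubit of an n-qubit pure state
    given as a list of 2**n computational-basis amplitudes."""
    half = 2 ** (n - 1)
    rho = [[0j, 0j], [0j, 0j]]
    for i in (0, 1):
        for j in (0, 1):
            rho[i][j] = sum(amps[i * half + k] * amps[j * half + k].conjugate()
                            for k in range(half))
    return rho

# GHZ state (|000> + |111>)/sqrt(2): symmetric, with maximally mixed
# one-qubit reduction rho = I/2, hence <sigma_z> = rho00 - rho11 = 0.
n = 3
ghz = [0.0] * 2 ** n
ghz[0] = ghz[-1] = 1.0 / math.sqrt(2.0)
rho = one_qubit_reduction(ghz, n)

# The W state is symmetric but NOT maximally entangled in this sense:
w = [0.0] * 8
for idx in (0b001, 0b010, 0b100):
    w[idx] = 1.0 / math.sqrt(3.0)
rho_w = one_qubit_reduction(w, 3)   # diag(2/3, 1/3), not I/2
```

The W-state contrast shows that symmetry alone does not give maximal mixedness, which is exactly the distinction the criterion in the abstract is built to detect.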

  17. Does mental exertion alter maximal muscle activation?

    PubMed Central

    Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald

    2014-01-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404

  18. Post-Secondary Expectations and Educational Attainment

    ERIC Educational Resources Information Center

    Sciarra, Daniel T.; Ambrosino, Katherine E.

    2011-01-01

    This study utilized student, teacher, and parent expectations during high school to analyze their predictive effect on post-secondary education status two years after scheduled graduation. The sample included 5,353 students, parents and teachers who participated in the Educational Longitudinal Study (ELS; 2002-2006). The researchers analyzed data…

  19. Inflation in maximal gauged supergravities

    SciTech Connect

    Kodama, Hideo; Nozawa, Masato

    2015-05-18

    We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^(-3) and is close to the value in the Starobinsky model.

  20. Utilization of the Garland Assessment of Graduation Expectations Test Results.

    ERIC Educational Resources Information Center

    Strozeski, Michael W.

    Virtually every school system is concerned with two educational considerations: (1) where the students are academically, and (2) how to get the students to a particular set of points. Minimum competency testing has been proposed as one way to handle these concerns. Competency testing has, however, been criticized for encouraging "teaching to the…

  1. The futility of utility: how market dynamics marginalize Adam Smith

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2000-10-01

    Economic theorizing is based on the postulated, nonempirical notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility, and that the results represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically due to nonintegrable dynamics when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data or in real market data.

  2. Illustrated Examples of the Effects of Risk Preferences and Expectations on Bargaining Outcomes.

    ERIC Educational Resources Information Center

    Dickinson, David L.

    2003-01-01

    Describes bargaining examples that use expected utility theory. Provides example results that are intuitive and are shown both graphically and algebraically, offering upper-level students examples that illustrate the usefulness of expected utility theory. (JEH)
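
The role of risk preferences can be made concrete with a CRRA-utility sketch (illustrative numbers, not figures from the article): a more risk-averse bargainer assigns a lower certainty equivalent to the same risky disagreement outcome, and so will accept smaller sure offers.

```python
def crra_utility(x, rho):
    """CRRA utility u(x) = x**(1 - rho) / (1 - rho), for 0 < rho < 1."""
    return x ** (1.0 - rho) / (1.0 - rho)

def certainty_equivalent(outcomes, probs, rho):
    """Sure amount whose utility equals the gamble's expected utility."""
    eu = sum(p * crra_utility(x, rho) for x, p in zip(outcomes, probs))
    return (eu * (1.0 - rho)) ** (1.0 / (1.0 - rho))

# 50/50 gamble over 100 or 0 (expected value 50)
ce_mild = certainty_equivalent([100.0, 0.0], [0.5, 0.5], rho=0.5)    # 25.0
ce_averse = certainty_equivalent([100.0, 0.0], [0.5, 0.5], rho=0.8)  # ~3.1
```

Both certainty equivalents sit below the expected value of 50, and the more risk-averse agent's is far lower: holding out for a risky outcome is worth less to that side, which is the kind of mechanism graphical bargaining examples rely on.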

  3. Beyond my wildest expectations.

    PubMed

    Nester, Eugene

    2014-01-01

    With support from my parents, I fulfilled their and my expectations of graduating from college and becoming a scientist. My scientific career has focused on two organisms, Bacillus subtilis and Agrobacterium tumefaciens, and two experimental systems, aromatic amino acid synthesis and DNA transfer in bacteria and plants. Studies on B. subtilis emphasized the genetics and biochemistry of aromatic amino acid synthesis and the characterization of competence in DNA transformation. I carried out both as a postdoc at Stanford with Josh Lederberg. At the University of Washington, I continued these studies and then investigated how Agrobacterium transforms plant cells. In collaboration, Milt Gordon, Mary-Dell Chilton, and I found that this bacterium could transfer a piece of its plasmid into plant cells and thereby modify their properties. This discovery opened up a host of intriguing questions that we have tried to answer over the last 35 years. PMID:25208299

  4. Maximally Entangled Multipartite States: A Brief Survey

    NASA Astrophysics Data System (ADS)

    Enríquez, M.; Wintrowicz, I.; Życzkowski, K.

    2016-03-01

    The problem of identifying maximally entangled quantum states of a composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used.

  5. Specificity of a Maximal Step Exercise Test

    ERIC Educational Resources Information Center

    Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.

    2007-01-01

    To adhere to the principle of "exercise specificity" exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO sub 2) when tested using an activity specific, maximal step…

  6. Great expectations: what do patients expect and how can expectations be managed?

    PubMed

    Newton, J T; Cunningham, S J

    2013-06-01

    Patients' expectations of their treatment are a key determinant in their satisfaction with treatment. Expectations may encompass not only notions of the outcome of treatment, but also the process of treatment. This article explores the processes by which expectations are formed, differences in expectations across patient groups, and the psychopathology of individuals with unrealistic expectations of treatment, as manifested in body dysmorphic disorder.

  7. A comparative study of expectant parents' childbirth expectations.

    PubMed

    Kao, Bi-Chin; Gau, Meei-Ling; Wu, Shian-Feng; Kuo, Bih-Jaw; Lee, Tsorng-Yeh

    2004-09-01

    The purpose of this study was to understand childbirth expectations and differences in childbirth expectations among expectant parents. Using convenience sampling, 200 couples willing to participate in this study were chosen from two hospitals in central Taiwan. Inclusion criteria were at least 36 weeks of gestation, aged 18 and above, no prenatal complications, and willingness to consent to participate in this study. Instruments used to collect data included basic demographic data and the Childbirth Expectations Questionnaire. Findings of the study revealed that (1) five factors were identified by expectant parents regarding childbirth expectations including the caregiving environment, expectation of labor pain, spousal support, control and participation, and medical and nursing support; (2) no general differences were identified in the childbirth expectations between expectant fathers and expectant mothers; and (3) expectant fathers with a higher socioeconomic status and who had received prenatal (childbirth) education had higher childbirth expectations, whereas mothers displayed no differences across demographic characteristics. The study results may help clinical healthcare providers better understand expectant parents' childbirth expectations and differences in expectations during labor and birth, in order to improve the medical and nursing system and promote positive childbirth experiences and satisfaction for expectant parents.

  8. Sociology of Low Expectations

    PubMed Central

    Samuel, Gabrielle; Williams, Clare

    2015-01-01

    Social scientists have drawn attention to the role of hype and optimistic visions of the future in providing momentum to biomedical innovation projects by encouraging innovation alliances. In this article, we show how less optimistic, uncertain, and modest visions of the future can also provide innovation projects with momentum. Scholars have highlighted the need for clinicians to carefully manage the expectations of their prospective patients. Using the example of a pioneering clinical team providing deep brain stimulation to children and young people with movement disorders, we show how clinicians confront this requirement by drawing on their professional knowledge and clinical expertise to construct visions of the future with their prospective patients; visions which are personalized, modest, and tainted with uncertainty. We refer to this vision-constructing work as recalibration, and we argue that recalibration enables clinicians to manage the tension between the highly optimistic and hyped visions of the future that surround novel biomedical interventions, and the exigencies of delivering those interventions in a clinical setting. Drawing on work from science and technology studies, we suggest that recalibration enrolls patients in an innovation alliance by creating a shared understanding of how the “effectiveness” of an innovation shall be judged. PMID:26527846

  9. Expectations and speech intelligibility.

    PubMed

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  11. Are all maximally entangled states pure?

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.

    2005-10-01

    We study whether all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized from any other system.

  12. Matching, maximizing, and hill-climbing

    PubMed Central

    Hinson, John M.; Staddon, J. E. R.

    1983-01-01

    In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350
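    The "generalized matching law" referenced in this abstract is conventionally written in its power-function (Baum) form; the symbols below follow that standard usage rather than this paper's own notation:

```latex
% Generalized matching law (power-function form):
%   B_1, B_2 : response rates on the two alternatives
%   R_1, R_2 : obtained reinforcement rates
%   a        : sensitivity (a = 1 gives strict matching)
%   c        : bias toward one alternative (c = 1 means no bias)
\log\frac{B_1}{B_2} \;=\; a\,\log\frac{R_1}{R_2} \;+\; \log c
```

    Strict matching is the special case a = 1, c = 1; hill-climbing rules such as momentary maximizing predict aggregate choice ratios close to this form.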

  13. Purification of Gaussian maximally mixed states

    NASA Astrophysics Data System (ADS)

    Jeong, Kabgyun; Lim, Youngrong

    2016-10-01

    We find that the purifications of several Gaussian maximally mixed states (GMMSs) correspond to some Gaussian maximally entangled states (GMESs) in the continuous-variable regime. Here, we consider a two-mode squeezed vacuum (TMSV) state as a purification of the thermal state and construct a general formalism of the Gaussian purification process. Moreover, we introduce other kinds of GMESs via this process. All of our purified states of the GMMSs exhibit Gaussian profiles; thus, the states show maximal quantum entanglement in the Gaussian regime.

  15. Great expectations: temporal expectation modulates perceptual processing speed.

    PubMed

    Vangkilde, Signe; Coull, Jennifer T; Bundesen, Claus

    2012-10-01

    In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on perceptual speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus foreperiod were used to manipulate temporal expectations. By computational modeling we estimated two distinct components of visual attention: the temporal threshold of conscious perception (t₀ ms) and the speed of subsequent encoding into visual short-term memory (v items/s). Notably, these components were measured independently of any motor involvement. The threshold t₀ was unaffected by temporal expectation, but perceptual processing speed v increased with increasing expectation. By employing constant hazard rates to keep expectation constant over time, we further confirmed that the increase in perceptual speed was independent of the cue-stimulus duration. Thus, our results strongly suggest temporal expectations optimize perceptual performance by speeding information processing.
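    The hazard-rate manipulation described above can be made concrete with a short numerical sketch. An exponential foreperiod distribution yields a constant hazard rate (expectation does not grow as the foreperiod elapses), while a uniform one yields a rising hazard. All names here are illustrative, not taken from the study:

```python
import numpy as np

# Hazard rate h(t) = f(t) / (1 - F(t)): the instantaneous probability that the
# stimulus appears now, given that it has not appeared yet.
def hazard(pdf, cdf, t):
    return pdf(t) / (1.0 - cdf(t))

t = np.linspace(0.0, 2.0, 201)

# Exponential foreperiod (rate lam): hazard is constant at lam, so temporal
# expectation stays flat over the cue-stimulus interval.
lam = 1.5
h_exp = hazard(lambda t: lam * np.exp(-lam * t),
               lambda t: 1.0 - np.exp(-lam * t), t)

# Uniform foreperiod on [0, 3]: hazard h(t) = 1/(3 - t) rises as time passes,
# so expectation (and, per the study, perceptual processing speed) increases.
h_uni = hazard(lambda t: np.full_like(t, 1.0 / 3.0),
               lambda t: t / 3.0, t)

print(h_exp[0], h_exp[-1])   # both ≈ 1.5 (constant hazard)
print(h_uni[0] < h_uni[-1])  # True (increasing hazard)
```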

  16. Maximal hypersurfaces in asymptotically stationary spacetimes

    NASA Astrophysics Data System (ADS)

    Chrusciel, Piotr T.; Wald, Robert M.

    1992-12-01

    The purpose of the work is to extend the results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurfaces in asymptotically stationary spacetimes is proven. Existence of maximal surfaces and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one-parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'black hole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of spacetimes possesses a black hole and a white hole, with the black and white hole horizons intersecting in a compact 2-surface S.

  17. Great Expectations: Temporal Expectation Modulates Perceptual Processing Speed

    ERIC Educational Resources Information Center

    Vangkilde, Signe; Coull, Jennifer T.; Bundesen, Claus

    2012-01-01

    In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on "perceptual" speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus…

  18. AUC-Maximizing Ensembles through Metalearning

    PubMed Central

    LeDell, Erin; van der Laan, Mark J.; Peterson, Maya

    2016-01-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
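    As a toy illustration of the metalearning idea (not the Super Learner implementation), one can blend two base predictors with a convex weight chosen to maximize AUC directly. A 1-D grid search stands in for the paper's nonlinear optimization; all variable names are invented:

```python
import numpy as np

def auc(y, s):
    """Pairwise (Mann-Whitney) AUC: P(score_pos > score_neg), ties count 0.5."""
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)

# Two noisy base predictors, each weakly informative about the label.
p1 = y + rng.normal(0, 1.5, size=y.size)
p2 = y + rng.normal(0, 1.5, size=y.size)

# Metalearning step: pick the convex weight w that maximizes the AUC of the
# blend w*p1 + (1-w)*p2.
ws = np.linspace(0.0, 1.0, 101)
aucs = [auc(y, w * p1 + (1 - w) * p2) for w in ws]
w_best = ws[int(np.argmax(aucs))]
best = max(aucs)

# The grid includes w=0 and w=1, so the blend never underperforms either base.
print(round(auc(y, p1), 3), round(auc(y, p2), 3), round(best, 3))
```

    In practice the weights live on a higher-dimensional simplex and the AUC is cross-validated, which is why the paper resorts to general nonlinear optimizers rather than a grid.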

  20. Natural selection and the maximization of fitness.

    PubMed

    Birch, Jonathan

    2016-08-01

    The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.

  1. Formation Control of the MAXIM L2 Libration Orbit Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda

    2004-01-01

    The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.

  2. Measuring Alcohol Expectancies in Youth

    ERIC Educational Resources Information Center

    Randolph, Karen A.; Gerend, Mary A.; Miller, Brenda A.

    2006-01-01

    Beliefs about the consequences of using alcohol, alcohol expectancies, are powerful predictors of underage drinking. The Alcohol Expectancies Questionnaire-Adolescent form (AEQ-A) has been widely used to measure expectancies in youth. Despite its broad use, the factor structure of the AEQ-A has not been firmly established. It is also not known…

  3. Resources and energetics determined dinosaur maximal size

    PubMed Central

    McNab, Brian K.

    2009-01-01

    Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of 2 species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has a smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600

  4. Patient (customer) expectations in hospitals.

    PubMed

    Bostan, Sedat; Acuner, Taner; Yilmaz, Gökhan

    2007-06-01

    Patients' expectations are one of the determining factors of healthcare service. The purpose of this study is to measure patients' expectations, based on patients' rights. This study was done with a Likert-type survey of the Trabzon population. The analyses showed that the level of patient expectations was high on the factor of receiving information and at an acceptable level on the other factors. Statistically significant relationships were found between the patients' expectations and age, sex, education, health insurance, and family income (p<0.05). According to this study, the current legal regulations have higher standards than the expectations of the patients. The patients' high level of satisfaction is interpreted as a consequence of their low level of expectations. It is suggested that educational and public awareness studies on patients' rights must be done in order to raise the expectations of the patients. PMID:17028043

  5. Quantum theory allows for absolute maximal contextuality

    NASA Astrophysics Data System (ADS)

    Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán

    2015-12-01

    Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
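    As a concrete illustration of the quantities S, α and ϑ introduced above, the standard values for the KCBS pentagon scenario are quoted here for orientation (they are not computed in this paper):

```latex
% KCBS scenario: witness S = \sum_{i=1}^{5} P(e_i) over n = 5 pairwise-linked
% exclusive events; exclusivity graph C_5 (the pentagon).
\alpha(C_5) = 2 \quad\text{(noncontextual bound)}, \qquad
\vartheta(C_5) = \sqrt{5} \quad\text{(quantum bound)},
% giving the contextuality ratio
\frac{\vartheta}{\alpha} = \frac{\sqrt{5}}{2} \approx 1.118 \;\ll\; n = 5 .
```

    Absolute maximal contextuality requires scenarios whose ratio ϑ/α approaches n itself, far beyond what such familiar inequalities exhibit.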

  6. Massive nonplanar two-loop maximal unitarity

    NASA Astrophysics Data System (ADS)

    Søgaard, Mads; Zhang, Yang

    2014-12-01

    We explore maximal unitarity for nonplanar two-loop integrals with up to four massive external legs. In this framework, the amplitude is reduced to a basis of master integrals whose coefficients are extracted from maximal cuts. The hepta-cut of the nonplanar double box defines a nodal algebraic curve associated with a multiply pinched genus-3 Riemann surface. All possible configurations of external masses are covered by two distinct topological pictures in which the curve decomposes into either six or eight Riemann spheres. The procedure relies on consistency equations based on vanishing of integrals of total derivatives and Levi-Civita contractions. Our analysis indicates that these constraints are governed by the global structure of the maximal cut. Lastly, we present an algorithm for computing generalized cuts of massive integrals with higher powers of propagators based on the Bezoutian matrix method.

  7. The maximal process of nonlinear shot noise

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2009-05-01

    In the nonlinear shot noise system model, shots’ statistics are governed by general Poisson processes, and shots’ decay dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system’s maximal shot magnitude. This ‘maximal process’ is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed-form.

  8. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
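    The fixation rule summarized above (fixate next wherever uncertainty reduction is greatest) can be sketched as a greedy loop over a discrete uncertainty map. The 1-D map, window size, and update factor below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def next_fixation(uncertainty, radius):
    """Pick the location whose foveal window covers the most remaining uncertainty."""
    n = uncertainty.size
    gains = np.array([uncertainty[max(0, i - radius):i + radius + 1].sum()
                      for i in range(n)])
    return int(np.argmax(gains))

# 1-D "stimulus" with two uncertain regions; values are entropy per location.
u = np.array([0.1, 0.1, 0.9, 1.0, 0.9, 0.1, 0.1, 0.6, 0.7, 0.1])

fixations = []
for _ in range(3):
    i = next_fixation(u, radius=1)
    fixations.append(i)
    # Foveating a region resolves most of its uncertainty.
    lo, hi = max(0, i - 1), i + 2
    u[lo:hi] *= 0.05

print(fixations)  # the first fixation lands on the widest uncertain region
```

    Each pass maximizes the immediate information gain, mirroring the model's rule of minimizing uncertainty about the stimulus one fixation at a time.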

  9. VO2max during successive maximal efforts.

    PubMed

    Foster, Carl; Kuffel, Erin; Bradley, Nicole; Battista, Rebecca A; Wright, Glenn; Porcari, John P; Lucia, Alejandro; deKoning, Jos J

    2007-12-01

    The concept of VO(2)max has been a defining paradigm in exercise physiology for >75 years. Within the last decade, this concept has been both challenged and defended. The purpose of this study was to test the concept of VO(2)max by comparing VO(2) during a second exercise bout following a preliminary maximal effort exercise bout. The study had two parts. In Study #1, physically active non-athletes performed incremental cycle exercise. After 1-min recovery, a second bout was performed at a higher power output. In Study #2, competitive runners performed incremental treadmill exercise and, after 3-min recovery, a second bout at a higher speed. In Study #1 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (3.95 +/- 0.75 vs. 4.06 +/- 0.75 l min(-1)). Maximal heart rate was not different (179 +/- 14 vs. 180 +/- 13 bpm) although maximal V(E) was higher in the second bout (141 +/- 36 vs. 151 +/- 34 l min(-1)). In Study #2 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (4.09 +/- 0.97 vs. 4.03 +/- 1.16 l min(-1)), nor was maximal heart rate (184 +/- 6 vs. 181 +/- 10 bpm) or maximal V(E) (126 +/- 29 vs. 126 +/- 34 l min(-1)). The results support the concept that the highest VO(2) during a maximal incremental exercise bout is unlikely to change during a subsequent exercise bout, despite higher muscular power output. As such, the results support the "classical" view of VO(2)max. PMID:17891414

  11. A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.

    PubMed

    Leibfried, Felix; Braun, Daniel A

    2015-08-01

    Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
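    The trade-off described above can be summarized by a rate distortion style objective. This is a generic form, with β an assumed trade-off parameter rather than the paper's exact notation:

```latex
% Bounded-rational objective for the neuron:
%   X : sensory input,  Y : output (firing),  R : reward
\max_{p(y \mid x)} \;\; \mathbb{E}\big[ R(x, y) \big] \;-\; \frac{1}{\beta}\, I(X; Y)
% Large \beta tolerates more information processing; small \beta pushes the
% neuron's instantaneous firing toward its average firing behavior.
```

    The mutual-information penalty is what translates, in the paper's derivation, into penalizing deviations between instantaneous and average firing.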

  13. On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites

    ERIC Educational Resources Information Center

    Penev, Spiridon; Raykov, Tenko

    2006-01-01

    A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…

  14. The evolution of utility functions and psychological altruism.

    PubMed

    Clavien, Christine; Chapuisat, Michel

    2016-04-01

    Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborate formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the mathematical stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as own children, romantic partners, and long-term reciprocators. PMID:26598465

  16. Labview utilities

    2011-09-30

    The software package provides several utilities written in LabVIEW. These utilities do not form independent programs; rather, they can be used as a library or as controls in other LabVIEW programs. The utilities include several new controls (XControls), VIs for input and output routines, and other helper functions not provided in the standard LabVIEW environment.

  17. Ehrenfest's Lottery--Time and Entropy Maximization

    ERIC Educational Resources Information Center

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…

  18. Maximally informative foraging by Caenorhabditis elegans

    PubMed Central

    Calhoun, Adam J; Chalasani, Sreekanth H; Sharpee, Tatyana O

    2014-01-01

    Animals have evolved intricate search strategies to find new sources of food. Here, we analyze a complex food seeking behavior in the nematode Caenorhabditis elegans (C. elegans) to derive a general theory describing different searches. We show that C. elegans, like many other animals, uses a multi-stage search for food, where they initially explore a small area intensively (‘local search’) before switching to explore a much larger area (‘global search’). We demonstrate that these search strategies as well as the transition between them can be quantitatively explained by a maximally informative search strategy, where the searcher seeks to continuously maximize information about the target. Although performing maximally informative search is computationally demanding, we show that a drift-diffusion model can approximate it successfully with just three neurons. Our study reveals how the maximally informative search strategy can be implemented and adopted to different search conditions. DOI: http://dx.doi.org/10.7554/eLife.04220.001 PMID:25490069
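
    The drift-diffusion approximation mentioned above can be sketched as a first-passage process. The parameter values here are illustrative assumptions, not the paper's fitted model: an accumulator integrates noisy "no food found" evidence, and crossing a threshold triggers the local-to-global transition.

```python
import numpy as np

def time_to_switch(drift=-0.1, noise=0.5, threshold=-5.0, dt=0.1, seed=0):
    """First-passage time of a drift-diffusion accumulator to the threshold."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while x > threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Mean switch time is set by threshold/|drift| (= 50 here), jittered by noise.
times = [time_to_switch(seed=s) for s in range(100)]
print(f"mean switch time: {np.mean(times):.1f}")
```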

  19. How to Generate Good Profit Maximization Problems

    ERIC Educational Resources Information Center

    Davis, Lewis

    2014-01-01

    In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…

  20. A Model of College Tuition Maximization

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  1. Does evolution lead to maximizing behavior?

    PubMed

    Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen

    2015-07-01

    A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences).

  2. Faculty Salaries and the Maximization of Prestige

    ERIC Educational Resources Information Center

    Melguizo, Tatiana; Strober, Myra H.

    2007-01-01

    Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…

  3. Maximizing the Spectacle of Water Fountains

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…

  4. Maximizing the Effective Use of Formative Assessments

    ERIC Educational Resources Information Center

    Riddell, Nancy B.

    2016-01-01

    In the current age of accountability, teachers must be able to produce tangible evidence of students' concept mastery. This article focuses on implementation of formative assessments before, during, and after instruction in order to maximize teachers' ability to effectively monitor student achievement. Suggested strategies are included to help…

  5. The Winning Edge: Maximizing Success in College.

    ERIC Educational Resources Information Center

    Schmitt, David E.

    This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…

  6. Understanding violations of Gricean maxims in preschoolers and adults

    PubMed Central

    Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji

    2015-01-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018

  7. Energetics of kayaking at submaximal and maximal speeds.

    PubMed

    Zamparo, P; Capelli, C; Guerrini, G

    1999-01-01

    The energy cost of kayaking per unit distance (C(k), kJ x m(-1)) was assessed in eight middle- to high-class athletes (three males and five females; 45-76 kg body mass; 1.50-1.88 m height; 15-32 years of age) at submaximal and maximal speeds. At submaximal speeds, C(k) was measured by dividing the steady-state oxygen consumption (VO(2), l x s(-1)) by the speed (v, m x s(-1)), assuming an energy equivalent of 20.9 kJ x l O(2)(-1). At maximal speeds, C(k) was calculated from the ratio of the total metabolic energy expenditure (E, kJ) to the distance (d, m). E was assumed to be the sum of three terms, as originally proposed by Wilkie (1980): E = AnS + alpha VO(2max) x t - alpha VO(2max) x tau x (1 - e(-t x tau(-1))), where alpha is the energy equivalent of O(2) (20.9 kJ x l O(2)(-1)), tau is the time constant with which VO(2max) is attained at the onset of exercise at the muscular level, AnS is the amount of energy derived from anaerobic energy utilization, t is the performance time, and VO(2max) is the net maximal VO(2). Individual VO(2max) was obtained from the VO(2) measured during the last minute of the 1000-m or 2000-m maximal run. The average metabolic power output (E, kW) amounted to 141% and 102% of the individual maximal aerobic power (VO(2max)) over the shortest (250 m) and the longest (2000 m) distance, respectively. The average (SD) power provided by oxidative processes increased with the distance covered [from 0.64 (0.14) kW at 250 m to 1.02 (0.31) kW at 2000 m], whereas that provided by anaerobic sources showed the opposite trend. The net C(k) was a continuous power function of the speed over the entire range of velocities from 2.88 to 4.45 m x s(-1): C(k) = 0.02 x v(2.26) (r = 0.937, n = 32). PMID:10541920
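
    The energy-balance bookkeeping above (Wilkie's 1980 form) is straightforward to reproduce numerically. The parameter values in this sketch are illustrative assumptions, not the paper's measured data:

```python
import math

ALPHA = 20.9   # kJ per litre of O2
TAU = 24.0     # s, time constant of VO2 on-kinetics (assumed)

def total_energy(ans_kj, vo2max_l_per_s, t_s):
    """E = AnS + alpha*VO2max*t - alpha*VO2max*tau*(1 - exp(-t/tau))."""
    aerobic = ALPHA * vo2max_l_per_s * t_s
    o2_deficit = ALPHA * vo2max_l_per_s * TAU * (1.0 - math.exp(-t_s / TAU))
    return ans_kj + aerobic - o2_deficit

def cost_per_metre(ans_kj, vo2max_l_per_s, t_s, d_m):
    return total_energy(ans_kj, vo2max_l_per_s, t_s) / d_m

# Hypothetical 1000-m effort: 25 kJ anaerobic, VO2max 0.067 l/s, 240 s
print(f"C_k = {cost_per_metre(25.0, 0.067, 240.0, 1000.0):.3f} kJ/m")
```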

  8. High Hopes and High Expectations!

    ERIC Educational Resources Information Center

    Wilford, Sara

    2006-01-01

    The start of each new school year is an especially hopeful time, and this author has found that clearly communicating expectations for teachers and families can set the stage for a wonderful new school year. This article discusses the expectations of teachers, directors, and families as a new school year begins.

  9. Sibling Status Effects: Adult Expectations.

    ERIC Educational Resources Information Center

    Baskett, Linda Musun

    1985-01-01

    This study attempted to determine what expectations or beliefs adults might hold about a child based on his or her sibling status alone. Ratings on 50 adjective pairs for each of three sibling status types, only, oldest, and youngest child, were assessed in relation to adult expectations, birth order, and parental status of rater. (Author/DST)

  10. Institutional Differences: Expectations and Perceptions.

    ERIC Educational Resources Information Center

    Silver, Harold

    1982-01-01

    The history of higher education has paid scant attention to the attitudes and expectations of its customers, students, and employers of graduates. Recent research on student and employer attitudes toward higher education sectors has not taken into account these expectations in the context of recent higher education history. (Author/MSE)

  11. Genomic medicine: too great expectations?

    PubMed

    O'Rourke, P P

    2013-08-01

    As advances in genomic medicine have captured the interest and enthusiasm of the public, an unintended consequence has been the creation of unrealistic expectations. Because these expectations may have a negative impact on individuals as well as genomics in general, it is important that they be understood and confronted.

  12. Making Your High Expectations Stick

    ERIC Educational Resources Information Center

    Education Digest: Essential Readings Condensed for Quick Review, 2007

    2007-01-01

    Every teacher starts the school year with great expectations for an orderly classroom. An experienced educator has probably tried various approaches to maintain order and create a classroom environment where every student feels comfortable contributing. To let the students know how much is really expected of them, teachers should not simply state…

  13. Expectations of Garland [Junior College].

    ERIC Educational Resources Information Center

    Garland Junior Coll., Boston, MA.

    A survey was conducted at Garland Junior College to determine the educational expectations of 69 new students, 122 parents, and 22 college faculty and administrators. Each group in this private women's college was asked to rank, in terms of expectations they held, the following items: learn job skills, mature in relations with others, become more…

  14. Theoretical maximal storage of hydrogen in zeolitic frameworks.

    PubMed

    Vitillo, Jenny G; Ricchiardi, Gabriele; Spoto, Giuseppe; Zecchina, Adriano

    2005-12-01

    Physisorption and encapsulation of molecular hydrogen in tailored microporous materials are two of the options for hydrogen storage. Among these materials, zeolites have been widely investigated. In these materials, the attained storage capacities vary widely with structure and composition, leading to the expectation that materials with improved binding sites, together with lighter frameworks, may represent efficient storage materials. In this work, we address the problem of the determination of the maximum amount of molecular hydrogen which could, in principle, be stored in a given zeolitic framework, as limited by the size, structure and flexibility of its pore system. To this end, the progressive filling with H2 of 12 purely siliceous models of common zeolite frameworks has been simulated by means of classical molecular mechanics. By monitoring the variation of cell parameters upon progressive filling of the pores, conclusions are drawn regarding the maximum storage capacity of each framework and, more generally, on framework flexibility. The flexible non-pentasils RHO, FAU, KFI, LTA and CHA display the highest maximal capacities, ranging between 2.65 and 2.86 mass%, well below the targets set for automotive applications but still in an interesting range. The predicted maximal storage capacities correlate well with experimental results obtained at low temperature. The technique is easily extendable to any other microporous structure, and it can provide a method for the screening of hypothetical new materials for hydrogen storage applications.
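
    The mass% figures quoted above follow the usual convention m(H2) / (m(H2) + m(framework)) × 100, which is easy to check for a purely siliceous cell. The loading below is a hypothetical example, not a value from the paper:

```python
M_H2 = 2.016    # g/mol, molecular hydrogen
M_SIO2 = 60.08  # g/mol per SiO2 formula unit

def mass_percent(n_h2, n_sio2):
    """Gravimetric storage capacity of n_h2 H2 per n_sio2 SiO2 units."""
    m_h2 = n_h2 * M_H2
    return 100.0 * m_h2 / (m_h2 + n_sio2 * M_SIO2)

# Hypothetical loading: 42 H2 molecules per 48-SiO2 unit cell
print(f"{mass_percent(42, 48):.2f} mass%")
```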

  15. Price of anarchy is maximized at the percolation threshold.

    PubMed

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138
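
    The price-of-anarchy quantity itself is easiest to see in the textbook two-link (Pigou) network, which is not the paper's lattice model but shows what is being maximized: the ratio of the selfish-routing cost to the socially optimal cost.

```python
def total_cost(f):
    """Fraction f uses the congestible link (cost = f); 1-f uses the fixed-cost link (cost = 1)."""
    return f * f + (1.0 - f) * 1.0

# Selfish equilibrium: everyone takes the congestible link (its cost never exceeds 1).
selfish = total_cost(1.0)
# Social optimum: minimize total cost over the split (optimum is f = 0.5).
optimal = min(total_cost(f / 1000.0) for f in range(1001))
print(f"POA = {selfish / optimal:.3f}")   # 4/3 for this network
```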

  16. The price of anarchy is maximized at the percolation threshold

    NASA Astrophysics Data System (ADS)

    Skinner, Brian

    2015-03-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called "price of anarchy" (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed "congestible" and "incongestible" links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  19. Utility Static Generation Reliability

    1993-03-05

    PICES (Probabilistic Investigation of Capacity and Energy Shortages) was developed for estimating an electric utility's expected frequency and duration of capacity deficiencies on a daily on- and off-peak basis. In addition to the system loss-of-load probability (LOLP) and loss-of-load expectation (LOLE) indices, PICES calculates the expected frequency and duration of system capacity deficiencies and the probability, expectation, and expected frequency and duration of a range of system reserve margin states. Results are aggregated and printed on a weekly, monthly, or annual basis. The program employs hourly load data and either a two-state (on/off) or a more sophisticated three-state (on/partially on/fully off) generating unit representation. Unit maintenance schedules are determined on a weekly, levelized reserve margin basis. In addition to the 8760-hour annual load record, the user provides the following information for each unit: plant capacity, annual maintenance requirement, two- or three-state unit failure and repair rates, and, for three-state models, the partial-state capacity deficiency. PICES can also supply default failure and repair rate values, based on the Edison Electric Institute's 1979 Report on Equipment Availability for the Ten-Year Period 1968 Through 1977, for many common plant types. Multi-year analysis can be performed by specifying as input data the annual peak load growth rates and plant addition and retirement schedules for each year in the study.
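
    The LOLP index mentioned above is classically computed by convolving two-state unit outage distributions into a capacity outage table. A minimal sketch with made-up unit data (this is the generic textbook calculation, not PICES itself):

```python
# (capacity in MW, forced outage rate) for each two-state unit -- illustrative values
units = [(100, 0.05), (100, 0.05), (50, 0.02)]

def outage_table(units):
    """Probability distribution of total available capacity."""
    dist = {0: 1.0}
    for cap, q in units:
        new = {}
        for avail, p in dist.items():
            new[avail + cap] = new.get(avail + cap, 0.0) + p * (1 - q)  # unit up
            new[avail] = new.get(avail, 0.0) + p * q                    # unit down
        dist = new
    return dist

def lolp(units, load):
    """Probability that available capacity falls short of the load."""
    return sum(p for cap, p in outage_table(units).items() if cap < load)

print(f"LOLP at 150 MW load: {lolp(units, 150):.5f}")
```

    Multiplying such a daily LOLP by the number of days in the study period gives the corresponding LOLE.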

  20. Maximal temperature in a simple thermodynamical system

    NASA Astrophysics Data System (ADS)

    Dai, De-Chang; Stojkovic, Dejan

    2016-06-01

    Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before the realm of quantum gravity is reached. Namely, if two particles with a given center-of-mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. One can calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
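
    The scale of this maximal temperature can be recovered with a heuristic order-of-magnitude argument (mine, not the paper's full statistical treatment): equate the Schwarzschild diameter of a thermal two-particle collision with energy E ~ k_B T to the thermal wavelength of the particles,

```latex
% Black-hole formation sets in when the Schwarzschild diameter
% d_s = 4GE/c^4 matches the thermal wavelength \lambda \sim \hbar c / E:
\frac{4GE}{c^4} \sim \frac{\hbar c}{E}
\quad\Longrightarrow\quad
E \sim \frac{1}{2}\sqrt{\frac{\hbar c^5}{G}} = \frac{E_P}{2},
```

    so the critical energy is an order-one fraction of the Planck energy, consistent with the factor of about three below the Planck temperature obtained from the full calculation.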

  1. Aberration correction by maximizing generalized sharpness metrics.

    PubMed

    Fienup, J R; Miller, J J

    2003-04-01

    The technique of maximizing sharpness metrics has been used to estimate and compensate for aberrations with adaptive optics, to correct phase errors in synthetic-aperture radar, and to restore images. The largest class of sharpness metrics is the sum over a nonlinear point transformation of the image intensity. How the second derivative of the point nonlinearity varies with image intensity determines the effects of various metrics on the imagery. Some metrics emphasize making shadows darker, and others emphasize making bright points brighter. One can determine the image content needed to pick the best metric by computing the statistics of the image autocorrelation or of the Fourier magnitude, either of which is independent of the phase error. Computationally efficient, closed-form expressions for the gradient make possible efficient search algorithms to maximize sharpness.
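
    A minimal instance of such a metric is the sum of a nonlinear point transform of intensity, here f(I) = I², which this sketch (an illustration of the metric class, not the paper's correction algorithm) shows decreasing under blur:

```python
import numpy as np

def sharpness(img, power=2.0):
    """Generalized sharpness metric: sum of a point nonlinearity of intensity."""
    return np.sum(img ** power)

def blur(img):
    """Simple 3-tap box blur along each axis (stand-in for an aberration)."""
    k = np.array([1.0, 1.0, 1.0]) / 3.0
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

rng = np.random.default_rng(1)
img = rng.random((32, 32)) ** 4   # a few bright points on a dark field
# Blurring spreads intensity, so the convex metric drops:
print(sharpness(img) > sharpness(blur(img)))
```

    A correction loop would search over candidate phase corrections for the one maximizing this metric, using the closed-form gradient mentioned in the abstract.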

  2. Maximally Nonlocal and Monogamous Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Barrett, Jonathan; Kent, Adrian; Pironio, Stefano

    2006-10-01

    We introduce a version of the chained Bell inequality for an arbitrary number of measurement outcomes and use it to give a simple proof that the maximally entangled state of two d-dimensional quantum systems has no local component. That is, if we write its quantum correlations as a mixture of local correlations and general (not necessarily quantum) correlations, the coefficient of the local correlations must be zero. This suggests an experimental program to obtain as good an upper bound as possible on the fraction of local states and provides a lower bound on the amount of classical communication needed to simulate a maximally entangled state in d×d dimensions. We also prove that the quantum correlations violating the inequality are monogamous among nonsignaling correlations and, hence, can be used for quantum key distribution secure against postquantum (but nonsignaling) eavesdroppers.

  3. A theory of maximizing sensory information.

    PubMed

    van Hateren, J H

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power spectrum, the theory enables the computation of filters accomplishing this maximizing of information. Resulting filters are band-pass or high-pass at high signal-to-noise ratios, and low-pass at low signal-to-noise ratios. In spatial vision this corresponds to lateral inhibition and pooling, respectively. The filters comply with Weber's law over a considerable range of signal-to-noise ratios.
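
    The SNR-dependent filter shapes can be illustrated with a simple assumed gain (my combination of a whitening stage with Wiener-style noise suppression, not the paper's exact formula) applied to a natural-scene-like 1/f² power spectrum:

```python
import numpy as np

f = np.linspace(0.01, 10.0, 1000)   # spatial frequency axis
S = 1.0 / f**2                      # assumed stimulus power spectrum

def gain(noise):
    """Whitening (1/sqrt(S)) combined with Wiener suppression (S/(S+N))."""
    return np.sqrt(S) / (S + noise)

for noise, label in ((0.01, "high SNR"), (10.0, "low SNR")):
    g = gain(noise)
    print(f"{label}: peak gain at f = {f[np.argmax(g)]:.2f}")
```

    At high SNR the pass-band sits at high frequencies (band-/high-pass, i.e. lateral inhibition); at low SNR the peak shifts to low frequencies (low-pass, i.e. pooling), matching the abstract's description.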

  4. Nonlinear trading models through Sharpe Ratio maximization.

    PubMed

    Choey, M; Weigend, A S

    1997-08-01

    While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
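
    The objective itself can be shown in a toy form (not the paper's neural network): choose the mix of two synthetic return series that maximizes the in-sample Sharpe Ratio directly, rather than predicting prices first.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0010, 0.020, 2000)   # asset A: higher mean, higher risk (synthetic)
b = rng.normal(0.0004, 0.008, 2000)   # asset B: lower mean, lower risk (synthetic)

def sharpe(w):
    """In-sample Sharpe Ratio of the portfolio w*A + (1-w)*B."""
    r = w * a + (1.0 - w) * b
    return r.mean() / r.std()

ws = np.linspace(0.0, 1.0, 201)
w_best = max(ws, key=sharpe)
print(f"w(A) = {w_best:.2f}, Sharpe = {sharpe(w_best):.3f}")
```

    The paper replaces this grid search with gradient updates through a neural network whose output is the position size, but the maximized quantity is the same risk-adjusted ratio.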

  5. Maximal strength training improves aerobic endurance performance.

    PubMed

    Hoff, J; Gran, A; Helgerud, J

    2002-10-01

    The aim of this experiment was to examine the effects of maximal strength training with emphasis on neural adaptations on strength- and endurance-performance for endurance trained athletes. Nineteen male cross-country skiers about 19.7 +/- 4.0 years of age and a maximal oxygen uptake (VO(2 max)) of 69.4 +/- 2.2 mL x kg(-1) x min(-1) were randomly assigned to a training group (n = 9) or a control group (n = 10). Strength training was performed, three times a week for 8 weeks, using a cable pulley simulating the movements in double poling in cross-country skiing, and consisted of three sets of six repetitions at a workload of 85% of one repetition maximum emphasizing maximal mobilization of force in the concentric movement. One repetition maximum improved significantly from 40.3 +/- 4.5 to 44.3 +/- 4.9 kg. Time to peak force (TPF) was reduced by 50 and 60% on two different submaximal workloads. Endurance performance measured as time to exhaustion (TTE) on a double poling ski ergometer at maximum aerobic velocity, improved from 6.49 to 10.18 min; 20.5% over the control group. Work economy changed significantly from 1.02 +/- 0.14 to 0.74 +/- 0.10 mL x kg(-0.67) x min(-1). Maximal strength training with emphasis on neural adaptations improves strength, particularly rate of force development, and improves aerobic endurance performance by improved work economy.

  6. Pulmonary diffusing capacity after maximal exercise.

    PubMed

    Manier, G; Moinard, J; Stoïcheff, H

    1993-12-01

    To determine the effect of maximal exercise on alveolocapillary membrane diffusing capacity (Dm), 12 professional handball players aged 23.4 +/- 3.3 (SD) yr were studied before and during early recovery from a progressive maximal exercise [immediately (t0), 15 min, and 30 min (t30) after exercise]. Lung capillary blood volume and Dm were determined in a one-step maneuver by simultaneous measurement of CO and NO lung transfer (DLCO and DLNO, respectively) with use of the single-breath breath-hold method. At t0, DLCO was elevated (13.1 +/- 12.0%; P < 0.01) but both DLNO and Dm for CO remained unchanged. Between t0 and t30, both DLCO and DLNO decreased significantly. At t30, DLCO was not different from the control resting value. DLNO (and consequently Dm for CO) was significantly lower than the control value at t30 (-8.9 +/- 8.1%; P < 0.01). Lung capillary blood volume was elevated at t0 (18.0 +/- 19.0%; P < 0.01) but progressively decreased to near control resting values at t30. Differences in the postexercise kinetics of both DLCO and DLNO point to a role of the transient increase in pulmonary vascular recruitment during the recovery period. We concluded that Dm was somewhat decreased in the 30 min after maximal exercise of short duration, but the exact pulmonary mechanisms involved remain to be elucidated.

  7. Formation Control for the MAXIM Mission

    NASA Technical Reports Server (NTRS)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.
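
    For reference, the familiar two-body, circular-reference-orbit special case of such linearized relative-motion equations is the Hill/Clohessy-Wiltshire system (the paper's contribution is the more general n-body form):

```latex
% Hill/Clohessy-Wiltshire equations: relative motion about a circular
% reference orbit with mean motion n (x radial, y along-track, z cross-track)
\ddot{x} - 2n\dot{y} - 3n^2 x = 0, \qquad
\ddot{y} + 2n\dot{x} = 0, \qquad
\ddot{z} + n^2 z = 0
```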

  8. Physical activity extends life expectancy

    Cancer.gov

    Leisure-time physical activity is associated with longer life expectancy, even at relatively low levels of activity and regardless of body weight, according to a study by a team of researchers led by the NCI.

  9. Electromyographic activity in sprinting at speeds ranging from sub-maximal to supra-maximal.

    PubMed

    Mero, A; Komi, P V

    1987-06-01

    Eleven male and eight female sprinters were filmed when running at five different speeds from sub-maximal to supra-maximal levels over a force platform. Supra-maximal running was performed by a towing system. The electromyographic (EMG) activity of 10 muscles was recorded telemetrically using surface electrodes. Pre-activity (PRA), activity during ground contact, immediate post-contact activity, and minimum activity were the major EMG parameters analyzed from two consecutive strides. Reproducibility of the variables used was rather high (r = 0.85 to 0.90 and coefficient of variation = 6.6 to 9.7%). The results demonstrated increases (P < 0.001) in PRA and forces in the braking phase when running speed increased to supra-maximum. PRA correlated (P < 0.01) with the average resultant force in the braking phase. Relative PRA (percentage of maximal value during ipsilateral contact) remained fairly constant (about 50 to 70%) at each speed. In the propulsion phase of contact, integrated EMG activity and forces increased (P < 0.001) to maximal running, but at supra-maximal speed the forces decreased non-significantly. Post-contact activity and minimum activity increased (P < 0.001) to maximal running but the supra-maximal running was characterized by lowered integrated EMG activities in these phases. Post-contact activity correlated (P < 0.05) with average resultant force in the propulsion phase of the male subjects when running velocity increased. It was suggested that PRA increases are needed for increasing muscle stiffness to resist great impact forces at the beginning of contact during sprint running.

  10. Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement

    SciTech Connect

    Lee, Seung-Woo; Jaksch, Dieter

    2009-07-15

    We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.

  11. Uplink Array Calibration via Far-Field Power Maximization

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Mukai, R.; Lee, D.

    2006-01-01

    Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
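
    The far-field power-maximization idea can be sketched as a simple coordinate-ascent loop: adjust one antenna's phase at a time and keep the setting that maximizes the measured combined power. The toy simulation below is our own illustration, not the Deep Space Network procedure; the antenna count, the phase search grid, and the noiseless power measurement are all assumptions.

```python
import numpy as np

# Hypothetical sketch: calibrate an uplink array by trimming each antenna's
# phase to maximize the power observed at a far-field reflector, one antenna
# at a time (coordinate ascent over commanded phase corrections).

rng = np.random.default_rng(0)
N = 4                                   # number of uplink antennas (assumed)
unknown = rng.uniform(0, 2 * np.pi, N)  # unknown path-length phase errors
trim = np.zeros(N)                      # commanded phase corrections

def far_field_power(trim):
    """Power of the coherently summed carriers seen by the reflector."""
    field = np.exp(1j * (unknown + trim)).sum()
    return abs(field) ** 2

for _ in range(3):                      # a few sweeps suffice in this toy case
    for k in range(N):
        candidates = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        powers = []
        for c in candidates:
            t = trim.copy()
            t[k] = c
            powers.append(far_field_power(t))
        trim[k] = candidates[int(np.argmax(powers))]

print(far_field_power(trim))
```

    Because each antenna's optimal phase aligns its carrier with the sum of the others, repeated sweeps drive the combined power toward the coherent maximum of N².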

  12. The ethics of life expectancy.

    PubMed

    Small, Robin

    2002-08-01

    Some ethical dilemmas in health care, such as over the use of age as a criterion of patient selection, appeal to the notion of life expectancy. However, some features of this concept have not been discussed. Here I look in turn at two aspects: one positive--our expectation of further life--and the other negative--the loss of potential life brought about by death. The most common method of determining this loss, by counting only the period of time between death and some particular age, implies that those who die at ages not far from that one are regarded as losing very little potential life, while those who die at greater ages are regarded as losing none at all. This approach has methodological advantages but ethical disadvantages, in that it fails to correspond to our strong belief that anyone who dies is losing some period of life that he or she would otherwise have had. The normative role of life expectancy expressed in the 'fair innings' attitude arises from a particular historical situation: not the increase of life expectancy in modern societies, but a related narrowing in the distribution of projected life spans. Since life expectancy is really a representation of existing patterns of mortality, which in turn are determined by many influences, including the present allocation of health resources, it should not be taken as a prediction, and still less as a statement of entitlement. PMID:12956176

  13. Broken Expectations: Violation of Expectancies, Not Novelty, Captures Auditory Attention

    ERIC Educational Resources Information Center

    Vachon, Francois; Hughes, Robert W.; Jones, Dylan M.

    2012-01-01

    The role of memory in behavioral distraction by auditory attentional capture was investigated: We examined whether capture is a product of the novelty of the capturing event (i.e., the absence of a recent memory for the event) or its violation of learned expectancies on the basis of a memory for an event structure. Attentional capture--indicated…

  14. Maximal possible accretion rates for slim disks

    NASA Astrophysics Data System (ADS)

    Lin, Yiqing; Jiao, Chengliang

    2009-12-01

    It was proved in previous work that there must be a maximal possible accretion rate Ṁ_max for a slim disk. Here we discuss how the value of Ṁ_max depends on the two fundamental parameters of the disk, namely the mass of the central black hole M and the viscosity parameter α. It is shown that Ṁ_max increases with decreasing α, but is almost independent of M if Ṁ_max is measured in units of the Eddington accretion rate Ṁ_Edd, which is in turn proportional to M.

  15. Electromagnetically induced grating with maximal atomic coherence

    SciTech Connect

    Carvalho, Silvania A.; Araujo, Luis E. E. de

    2011-10-15

    We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-{Lambda} atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.

  16. [What do psychiatric patients expect of inpatient psychiatric hospital treatment?].

    PubMed

    Fleischmann, Heribert

    2003-05-01

    Patients are mostly passive users of the health-care system: they are confronted with a supply of medical services and may express their satisfaction with it only retrospectively. In the future, our medical system must develop from a provider-driven perspective into a user-oriented medicine. Orientation to users means asking patients what services they expect (at the customer's option). The aim of our investigation was to assess patients' subjective expectations before the beginning of inpatient treatment: 1. What do they believe the disorder they are suffering from is called? 2. From which therapeutic measures do they expect help for themselves? 3. Do they want to take part in planning therapeutic measures? 209 of 344 (61%) of the patients were ready at admission to answer a self-designed questionnaire. Only 4% of the patients said that their disorder is called insanity. They preferred labels such as mental illness (45%), somatic illness (43%), and mental health problem (42%). A pharmacological therapy was expected by 61% of the patients in total. Most often expected were drugs against depressive disorders (32%), drugs against addiction (31%), and tranquilizers (29%). Only 10% of the patients expected to receive antipsychotic drugs. A verbal therapeutic intervention was expected by 76% of the patients. A talk with the doctor was the first-ranked desire (69%), followed by talking with the psychologist (60%), the nurses (58%), and fellow patients (56%). Psychotherapy in a narrower sense was expected by only 40% of the patients. Furthermore, privacy and recreation through walks ranked high among expectations (69%), followed by relaxation (59%), occupational therapy (55%), and sports or active exercise therapy (54%). 75% of the patients want to be informed about the therapy. 69% want to cooperate in planning the therapy. Only 21% leave the therapy entirely to the doctor. 
About one third of the patients expect a consultation with

  17. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    PubMed

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. 
The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  18. The asymptotic equipartition property in reinforcement learning and its relation to return maximization.

    PubMed

    Iwata, Kazunori; Ikeda, Kazushi; Sakai, Hideaki

    2006-01-01

    We discuss an important property called the asymptotic equipartition property on empirical sequences in reinforcement learning. This states that the typical set of empirical sequences has probability nearly one, that all elements in the typical set are nearly equi-probable, and that the number of elements in the typical set is an exponential function of the sum of conditional entropies if the number of time steps is sufficiently large. The sum is referred to as stochastic complexity. Using the property we elucidate the fact that the return maximization depends on two factors, the stochastic complexity and a quantity depending on the parameters of environment. Here, the return maximization means that the best sequences in terms of expected return have probability one. We also examine the sensitivity of stochastic complexity, which is a qualitative guide in tuning the parameters of action-selection strategy, and show a sufficient condition for return maximization in probability.
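
    The three claims of the asymptotic equipartition property can be checked numerically on a toy source. The sketch below uses an i.i.d. Bernoulli source rather than the paper's empirical sequences in reinforcement learning (a simplification made for brevity); the parameters p, n, and eps are arbitrary assumptions.

```python
import itertools
import math

# Illustrative sketch of the asymptotic equipartition property (AEP) for an
# i.i.d. Bernoulli(p) source: the typical set carries most of the probability,
# its elements are nearly equiprobable, and its size is about 2**(n*H).

p, n, eps = 0.3, 16, 0.1
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # entropy in bits

typical, typical_prob = 0, 0.0
for seq in itertools.product([0, 1], repeat=n):
    k = sum(seq)
    prob = p ** k * (1 - p) ** (n - k)
    # a sequence is eps-typical if its per-symbol log-probability is near -H
    if abs(-math.log2(prob) / n - H) < eps:
        typical += 1
        typical_prob += prob

print(typical_prob)            # probability of the typical set -> 1 as n grows
print(math.log2(typical) / n)  # per-symbol log-size, close to H
```

    Even at the modest n used here, the per-symbol log-size of the typical set is already within a few percent of the entropy; increasing n pushes the typical-set probability toward one.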

  19. Spiders Tune Glue Viscosity to Maximize Adhesion.

    PubMed

    Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali

    2015-11-24

    Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. Using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10(5)-10(6) cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulating polymer-salt interactions to control viscosity can provide a simple mechanism for designing humidity-responsive smart adhesives.

  20. Maximal liquid bridges between horizontal cylinders

    NASA Astrophysics Data System (ADS)

    Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.

    2016-08-01

    We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.

  1. Area coverage maximization in service facility siting

    NASA Astrophysics Data System (ADS)

    Matisziw, Timothy C.; Murray, Alan T.

    2009-06-01

    Traditionally, models for siting facilities in order to optimize coverage of area demand have made use of discrete space representations to efficiently handle both candidate facility locations and demand. These discretizations of space are often necessary given the linear functional forms of many siting models and the complexities associated with evaluating continuous space. Recently, several spatial optimization approaches have been proposed to address the more general problem of identifying facility sites that maximize regional coverage for the case where candidate sites and demand are continuously distributed across space. One assumption of existing approaches is that only demand falling within a prescribed radius of the facility can be effectively served. In many practical applications, however, service areas are not necessarily circular, as terrain, transportation, and service characteristics of the facility often result in irregular shapes. This paper develops a generalized service coverage approach, allowing a sited facility to have any continuous service area shape, not simply a circle. Given that demand and facility sites are assumed to be continuous throughout a region, geometrical properties of the demand region and the service facility coverage area are exploited to identify a facility site to optimize the correspondence between the two areas. In particular, we consider the case where demand is uniformly distributed and the service area is translated to maximize coverage. A heuristic approach is proposed for efficient model solution. Application results are presented for siting a facility given differently shaped service areas.
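
    As a rough illustration of the generalized coverage idea, the raster sketch below translates an arbitrarily shaped service footprint over a uniform demand region and keeps the placement that covers the most demand. The grids, the L-shaped footprint, and the exhaustive translation search are our own simplifications for illustration, not the paper's continuous-space model or its heuristic.

```python
import numpy as np

# Toy raster sketch: slide an irregular service-area footprint over a uniform
# demand region and record the translation that maximizes covered demand.

demand = np.zeros((20, 20), dtype=int)
demand[4:16, 3:18] = 1                 # rectangular demand region (invented)
demand[10:16, 12:18] = 0               # with a notch cut out

footprint = np.array([                 # irregular (L-shaped) service area
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
])

best_cover, best_site = -1, None
fh, fw = footprint.shape
for i in range(demand.shape[0] - fh + 1):      # exhaustive translation search
    for j in range(demand.shape[1] - fw + 1):
        cover = int((demand[i:i + fh, j:j + fw] * footprint).sum())
        if cover > best_cover:
            best_cover, best_site = cover, (i, j)

print(best_site, best_cover)  # site with maximal covered demand
```

    On realistic instances the exhaustive scan would be replaced by a heuristic, but the objective, maximizing the overlap between the translated service area and the demand region, is the same.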

  3. Maximal lactate steady state in Judo

    PubMed Central

    de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio

    2014-01-01

    Summary Background: the purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. Methods: to test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental judo-specific test to assess the RCT and a 30-minute MLSS test; both tests were performed mimicking the UchiKomi drills. Results: the intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p=0.40). In addition, a high and significant correlation between MLSS and RCT was observed (r=0.90, p=0.002), as well as high agreement. Conclusions: the RCT measured during the JSIT is a valid procedure for measuring aerobic demand while respecting the ecological validity of judo. PMID:25332923

  4. Optimizing Population Variability to Maximize Benefit

    PubMed Central

    Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye

    2015-01-01

    Variability is inherent in any population, regardless whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the “right mix” of individuals in a population. PMID:26650247
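
    The existence of an interior optimum can be illustrated with a toy "binary ruler" simulation: each individual reports one bit (is the quantity above my private threshold?), and the threshold variance controls how informative the aggregate vote is. The model below, including the estimator, population size, and the three variance levels, is our own hedged construction, not the paper's benefit function.

```python
import math
import random

# Toy sketch: N binary rulers each report whether a quantity s exceeds their
# threshold, drawn from N(0, sigma).  sigma = 0 gives everyone the same single
# bit; very large sigma gives nearly random votes.  An intermediate threshold
# variance minimizes the estimation error, i.e. maximizes the benefit.

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def estimate(s, sigma, n, rng):
    """Estimate s from the fraction of yes votes by inverting phi(s/sigma)."""
    yes = sum(1 for _ in range(n) if rng.gauss(0.0, sigma) < s)
    f = min(max(yes / n, 1.0 / n), 1.0 - 1.0 / n)  # clamp away from 0 and 1
    lo, hi = -10.0, 10.0                            # bisection on s
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if phi(mid / max(sigma, 1e-9)) < f:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def mse(sigma, trials=2000, n=64):
    rng = random.Random(1)
    err = 0.0
    for _ in range(trials):
        s = rng.uniform(-1.0, 1.0)      # quantity to be measured
        err += (estimate(s, sigma, n, rng) - s) ** 2
    return err / trials

errors = {sigma: mse(sigma) for sigma in (0.01, 0.7, 20.0)}
best = min(errors, key=errors.get)
print(best, errors)
```

    With near-zero variance the population collectively yields a single bit; with huge variance the votes are nearly random; the intermediate spread, the "right mix" of individuals, minimizes the estimation error.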

  5. Maximizing strain in miniaturized dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert

    2015-04-01

    We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently, smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, yielding half the displacement that can be obtained with a dead load. We extend our fabrication process to also show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.

  6. Metaphors As Storehouses of Expectation.

    ERIC Educational Resources Information Center

    Beavis, Allan K.; Thomas, A. Ross

    1996-01-01

    Explores how metaphors are used to identify and store some expectations that structure schools' interactions and communications. Outlines a systems-theoretical view of schools derived from Niklas Luhmann's social theories. Illustrates how the metaphors identified in an earlier study provide material contexts for identifying and storing structures…

  7. Reasonable Expectation of Adult Behavior.

    ERIC Educational Resources Information Center

    Todaro, Julie

    1999-01-01

    Discusses staff behavioral problems that prove difficult for successful library management. Suggests that reasonable expectations for behavior need to be established in such areas as common courtesies, environmental issues such as temperature and noise levels, work relationships and values, diverse work styles and ways of communicating, and…

  8. Children's Judgments of Expected Value.

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Anderson, Norman H.

    1994-01-01

    Expected value judgments of 5- through 10-year-olds were studied by having children view roulette-type games and make judgments of how happy a puppet playing the game would be. Even the youngest children showed some understanding of probability dependence, with children under eight using an additive integration rule and children eight and older…

  9. Supervising Prerelease Offenders: Clarifying Expectations.

    ERIC Educational Resources Information Center

    Benekos, Peter J.

    1986-01-01

    Presents and discusses a conceptual model of the concerns of prerelease offenders and community supervisors. The conceptualization suggests that "perceptual differences" of the concerns of prerelease status is one alternative for examining the supervisorial relationship. Attempts to identify and confront the different expectations of supervisors…

  10. Career Expectations of Accounting Students

    ERIC Educational Resources Information Center

    Elam, Dennis; Mendez, Francis

    2010-01-01

    The demographic make-up of accounting students is dramatically changing. This study sets out to measure how well the profession is ready to accommodate what may be very different needs and expectations of this new generation of students. Non-traditional students are becoming more and more of a tradition in the current college classroom.…

  11. Primary expectations of secondary metabolites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant secondary metabolites (e.g., phenolics) are important for human health, in addition to the organoleptic properties they impart to fresh and processed foods. Consumer expectations such as appearance, taste, or texture influence their purchasing decisions. Thorough identification of phenolic com...

  12. Training Focuses on Teachers' Expectations

    ERIC Educational Resources Information Center

    Gewertz, Catherine

    2005-01-01

    This article discusses a training program that aims to raise teachers' awareness of how they treat their students. The Teacher Expectations & Student Achievement, or TESA, program--which delves into whether teachers deal with their lower-achieving and higher-achieving students equitably--has been used nationally for more than 30 years. But its…

  13. Great Expectations and New Beginnings

    ERIC Educational Resources Information Center

    Davis, Frances A.

    2009-01-01

    Great Expectation and New Beginnings is a prenatal family support program run by the Family, Infant, and Preschool Program (FIPP) in North Carolina. FIPP has developed an evidence-based integrated framework of early childhood intervention and family support that includes three primary components: providing intervention in everyday family…

  14. Corporate diversification: expectations and outcomes.

    PubMed

    Clement, J P

    1988-01-01

    A review of the research concerning the diversification experience of firms in other industries shows that expectations of higher profit rates and lower risk are not entirely realistic. However, there are many ways in which the probability of financially successful diversification may be increased.

  15. Evaluation of Behavioral Expectation Scales.

    ERIC Educational Resources Information Center

    Zedeck, Sheldon; Baker, Henry T.

    Behavioral Expectation Scales developed by Smith and Kendall were evaluated. Results indicated slight interrater reliability between Head Nurses and Supervisors, moderate dependence among five performance dimensions, and correlation between two scales and tenure. Results are discussed in terms of procedural problems, critical incident problems,…

  16. Education: Expectation and the Unexpected

    ERIC Educational Resources Information Center

    Fulford, Amanda

    2016-01-01

    This paper considers concepts of expectation and responsibility, and how these drive dialogic interactions between tutor and student in an age of marketised Higher Education. In thinking about such interactions in terms of different forms of exchange, the paper considers the philosophy of Martin Buber and Emmanuel Levinas on dialogic…

  17. FastStats: Life Expectancy

    MedlinePlus

    ... years of age by sex, race and Hispanic origin Health, United States 2015, table 15 [PDF - 9.8 MB] Life expectancy at birth and at 65 years of age, by sex: Organisation for Economic Co-operation and Development (OECD) countries Health, United States 2015, table 14 [PDF - 9. ...

  18. Double-Entry Expectancy Tables.

    ERIC Educational Resources Information Center

    Wesman, Alexander G.

    1966-01-01

    Double-entry expectancy tables are used to make admissions, guidance, or employment decisions based on two predictors. Examples of their use in showing relationships between high school and college performance are explained. The advantages of double-entry expectancy tables given are: (1) relative simplicity of preparation requiring no formal…

  19. Privacy Expectations in Online Contexts

    ERIC Educational Resources Information Center

    Pure, Rebekah Abigail

    2013-01-01

    Advances in digital networked communication technology over the last two decades have brought the issue of personal privacy into sharper focus within contemporary public discourse. In this dissertation, I explain the Fourth Amendment and the role that privacy expectations play in the constitutional protection of personal privacy generally, and…

  20. Excap: maximization of haplotypic diversity of linked markers.

    PubMed

    Kahles, André; Sarqume, Fahad; Savolainen, Peter; Arvestad, Lars

    2013-01-01

    Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth-first search combined with a branch-and-bound approach. Since the worst-case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy.
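
    A minimal sketch of the search strategy described above (depth-first search with a branch-and-bound cut) might look as follows. The marker data, the diversity measure (number of distinct induced haplotypes), and the subset size are invented for illustration; this is not the Excap implementation.

```python
# Hedged sketch: pick k markers whose combination distinguishes the most
# haplotypes, via depth-first search with a branch-and-bound pruning rule.

def diversity(haplotypes, markers):
    """Number of distinct haplotypes induced by the chosen marker columns."""
    return len({tuple(h[m] for m in markers) for h in haplotypes})

def best_subset(haplotypes, n_markers, k):
    best = {"markers": (), "score": 0}

    def dfs(start, chosen):
        if len(chosen) == k:
            score = diversity(haplotypes, chosen)
            if score > best["score"]:
                best.update(markers=tuple(chosen), score=score)
            return
        # bound: even using *all* remaining markers, can this branch win?
        optimistic = diversity(haplotypes, chosen + list(range(start, n_markers)))
        if optimistic <= best["score"]:
            return  # prune: no completion of this branch can beat the best
        for m in range(start, n_markers - (k - len(chosen)) + 1):
            dfs(m + 1, chosen + [m])

    dfs(0, [])
    return best["markers"], best["score"]

# 6 haplotypes over 5 marker positions (0/1 alleles), invented data
haps = [
    [0, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
]
markers, score = best_subset(haps, 5, 2)
print(markers, score)
```

    The bound is valid because adding markers can only refine, never merge, haplotype classes, so the diversity of a partial choice plus all remaining markers upper-bounds every completion of that branch.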

  1. Paracellular epithelial sodium transport maximizes energy efficiency in the kidney.

    PubMed

    Pei, Lei; Solis, Glenn; Nguyen, Mien T X; Kamat, Nikhil; Magenheimer, Lynn; Zhuo, Min; Li, Jiahua; Curry, Joshua; McDonough, Alicia A; Fields, Timothy A; Welch, William J; Yu, Alan S L

    2016-07-01

    Efficient oxygen utilization in the kidney may be supported by paracellular epithelial transport, a form of passive diffusion that is driven by preexisting transepithelial electrochemical gradients. Claudins are tight-junction transmembrane proteins that act as paracellular ion channels in epithelial cells. In the proximal tubule (PT) of the kidney, claudin-2 mediates paracellular sodium reabsorption. Here, we used murine models to investigate the role of claudin-2 in maintaining energy efficiency in the kidney. We found that claudin-2-null mice conserve sodium to the same extent as WT mice, even during profound dietary sodium depletion, as a result of the upregulation of transcellular Na-K-2Cl transport activity in the thick ascending limb of Henle. We hypothesized that shifting sodium transport to transcellular pathways would lead to increased whole-kidney oxygen consumption. Indeed, compared with control animals, oxygen consumption in the kidneys of claudin-2-null mice was markedly increased, resulting in medullary hypoxia. Furthermore, tubular injury in kidneys subjected to bilateral renal ischemia-reperfusion injury was more severe in the absence of claudin-2. Our results indicate that paracellular transport in the PT is required for efficient utilization of oxygen in the service of sodium transport. We speculate that paracellular permeability may have evolved as a general strategy in epithelial tissues to maximize energy efficiency.

  2. Paracellular epithelial sodium transport maximizes energy efficiency in the kidney

    PubMed Central

    Pei, Lei; Nguyen, Mien T.X.; Kamat, Nikhil; Magenheimer, Lynn; Zhuo, Min; Li, Jiahua; McDonough, Alicia A.; Fields, Timothy A.; Welch, William J.; Yu, Alan S.L.

    2016-01-01

    Efficient oxygen utilization in the kidney may be supported by paracellular epithelial transport, a form of passive diffusion that is driven by preexisting transepithelial electrochemical gradients. Claudins are tight-junction transmembrane proteins that act as paracellular ion channels in epithelial cells. In the proximal tubule (PT) of the kidney, claudin-2 mediates paracellular sodium reabsorption. Here, we used murine models to investigate the role of claudin-2 in maintaining energy efficiency in the kidney. We found that claudin-2–null mice conserve sodium to the same extent as WT mice, even during profound dietary sodium depletion, as a result of the upregulation of transcellular Na-K-2Cl transport activity in the thick ascending limb of Henle. We hypothesized that shifting sodium transport to transcellular pathways would lead to increased whole-kidney oxygen consumption. Indeed, compared with control animals, oxygen consumption in the kidneys of claudin-2–null mice was markedly increased, resulting in medullary hypoxia. Furthermore, tubular injury in kidneys subjected to bilateral renal ischemia-reperfusion injury was more severe in the absence of claudin-2. Our results indicate that paracellular transport in the PT is required for efficient utilization of oxygen in the service of sodium transport. We speculate that paracellular permeability may have evolved as a general strategy in epithelial tissues to maximize energy efficiency. PMID:27214555

  3. Paracrine communication maximizes cellular response fidelity in wound signaling

    PubMed Central

    Handly, L Naomi; Pilko, Anna; Wollman, Roy

    2015-01-01

    Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet, variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly we predict the existence of a maximum gradient signal to noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485

  4. Graded Expectations: Predictive Processing and the Adjustment of Expectations during Spoken Language Comprehension

    PubMed Central

    Boudewyn, Megan A.; Long, Debra L.; Swaab, Tamara Y.

    2015-01-01

    The goal of this study was to investigate the use of local and global context to incoming words during listening comprehension. Local context was manipulated by presenting a target noun (e.g., cake, veggies) that was preceded by a word that described a prototypical or atypical feature of the noun (e.g., sweet, healthy). Global context was manipulated by presenting the noun in a scenario that was consistent or inconsistent with the critical noun (e.g., a birthday party). ERPs were examined at the feature word and at the critical noun. An N400 effect was found at the feature word reflecting the effect of compatibility with the global context. Global predictability and local feature-word consistency interacted at the critical noun: a larger N200 was found to nouns that mismatched predictions when the context was maximally constraining, relative to nouns in the other conditions. A graded N400 response was observed at the critical noun, modulated by global predictability and feature consistency. Finally, PNP effects of context-updating were observed to nouns supported by one contextual cue (global/local), but unsupported by the other. These results indicate (1) incoming words that are compatible with context-based expectations receive a processing benefit; (2) when the context is sufficiently constraining, specific lexical items may be activated; and (3) listeners dynamically adjust their expectations when input is inconsistent with their predictions, provided that the inconsistency has some level of support from either global or local context. PMID:25673006

  5. Adaptable Careers: Maximizing Less and Exploring More

    ERIC Educational Resources Information Center

    van Vianen, Annelies E. M.; De Pater, Irene E.; Preenen, Paul T. Y.

    2009-01-01

    Today, young adults are expected to decide between educational, vocational, and job options and to make the best choice possible. Career literatures emphasize the importance of young adults' career decision making but also acknowledge the problems related to making these decisions. The authors argue that career counselors could support clients'…

  6. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
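The two ingredients the abstract combines, the Expected Improvement criterion and a trust-region radius driven by the ratio of actual improvement to EI, can be sketched as below. This is an illustrative sketch, not the TRIKE code: the closed-form EI for minimization under a Gaussian posterior is standard, while the thresholds and shrink/grow factors are assumed example values.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI (minimization) at a point with surrogate mean mu and std sigma."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

def update_trust_region(radius, actual_improvement, ei,
                        shrink=0.5, grow=2.0, eta1=0.1, eta2=0.75):
    """Adjust the trust-region radius from the ratio of actual improvement to EI.
    eta1/eta2/shrink/grow are illustrative parameter choices."""
    rho = actual_improvement / ei if ei > 0 else 0.0
    if rho < eta1:
        return radius * shrink   # surrogate over-promised: contract the region
    if rho > eta2:
        return radius * grow     # surrogate under-promised: expand the region
    return radius
```

Each iterate would then maximize `expected_improvement` over the current trust region before the radius is updated from the observed ratio.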

  7. Dispatch Scheduling to Maximize Exoplanet Detection

    NASA Astrophysics Data System (ADS)

    Johnson, Samson; McCrady, Nate; MINERVA

    2016-01-01

    MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
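The dispatch idea, score every currently observable target with a weighting function and observe the best-scoring one, can be sketched as follows. The specific weighting terms (cadence urgency, priority, an altitude factor) are assumptions for illustration, not the MINERVA team's actual function.

```python
def dispatch_next_target(targets, now):
    """Return the observable target with the highest weight at time `now`.
    Each target is a dict; the weight terms below are illustrative."""
    def weight(t):
        if not t["observable"]:
            return float("-inf")          # never dispatch an unobservable target
        overdue = (now - t["last_obs"]) / t["cadence"]  # >1 means past its cadence
        return t["priority"] * overdue * t["altitude_term"]
    return max(targets, key=weight)
```

Because the weights are recomputed at every decision point, the scheduler stays flexible: weather closures or slow observations simply change which target scores highest next.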

  8. Maximally polarized states for quantum light fields

    SciTech Connect

    Sanchez-Soto, Luis L.; Yustas, Eulogio C.; Bjoerk, Gunnar; Klimov, Andrei B.

    2007-10-15

    The degree of polarization of a quantum field can be defined as its distance to an appropriate set of states. When we take unpolarized states as this reference set, the states optimizing this degree for a fixed average number of photons N present a fairly symmetric, parabolic photon statistic, with a variance scaling as N{sup 2}. Although no standard optical process yields such a statistic, we show that, to an excellent approximation, a highly squeezed vacuum can be taken as maximally polarized. We also consider the distance of a field to the set of its SU(2) transformed, finding that certain linear superpositions of SU(2) coherent states make this degree equal to unity.

  9. Constrained maximal power in small engines.

    PubMed

    Gaveau, B; Moreau, M; Schulman, L S

    2010-11-01

    Efficiency at maximum power is studied for two simple engines (three- and five-state systems). This quantity is found to be sensitive to the variable with respect to which the maximization is implemented. It can be wildly different from the well-known Curzon-Ahlborn bound (one minus the square root of the temperature ratio), or can be even closer than previously realized. It is shown that when the power is optimized with respect to a maximum number of variables the Curzon-Ahlborn bound is a lower bound, accurate at high temperatures, but a rather poor estimate when the cold reservoir temperature approaches zero (at which point the Carnot limit is achieved).
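The two benchmarks the abstract compares against are easy to state in code; this is just the standard textbook formulas, not the paper's three- and five-state engine models.

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Carnot limit: the maximum efficiency of any engine between two reservoirs."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Curzon-Ahlborn bound: efficiency at maximum power, 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)
```

For Tc = 300 K and Th = 600 K the Carnot limit is 0.5 while the Curzon-Ahlborn value is about 0.29; as Tc approaches zero both tend to 1, which is the regime where the abstract finds the Curzon-Ahlborn estimate to be poor.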

  10. Characterizing maximally singular phase-space distributions

    NASA Astrophysics Data System (ADS)

    Sperling, J.

    2016-07-01

    Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.

  11. Critical paths: maximizing patient care coordination.

    PubMed

    Spath, P L

    1995-01-01

    1. With today's emphasis on horizontal and vertical integration of patient care services and the new initiatives prompted by these challenges, OR nurses are considering new methods for managing the perioperative period. One such method is the critical path. 2. A critical path defines an optimal sequencing and timing of interventions by physicians, nurses, and other staff members for a particular diagnosis or procedure, designed to better use resources, maximize quality of care, and minimize delays. 3. Hospitals implementing path-based patient care have reported cost reductions and improved teamwork. Critical paths have been shown to reduce patient care costs by improving hospital efficiency, not merely by reducing physician practice variations.

  12. Mixtures of maximally entangled pure states

    NASA Astrophysics Data System (ADS)

    Flores, M. M.; Galapon, E. A.

    2016-09-01

    We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.

  13. Maximal energy extraction under discrete diffusive exchange

    SciTech Connect

    Hay, M. J.; Schiff, J.; Fisch, N. J.

    2015-10-15

    Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.

  14. Energy expenditure in maximal jumps on sand.

    PubMed

    Muramatsu, Shigeru; Fukudome, Akinori; Miyama, Motoyoshi; Arimoto, Morio; Kijima, Akira

    2006-01-01

    The purpose of this study was to comparatively investigate the energy expenditure of jumping on sand and on a firm surface. Eight male university volleyball players were recruited in this study and performed 3 sets of 10 repetitive jumps on sand (the S condition), and also on a force platform (the F condition). The subjects jumped every two seconds during a set, and the interval between sets was 20 seconds. The subjects performed each jump on sand with maximal exertion, while in the F condition they jumped as high as they did on sand. The oxygen requirement for jumping was defined as the total oxygen uptake consecutively measured between the first set of jumps and the point at which oxygen uptake recovers to the resting value, and the energy expenditure was calculated. The jump height in the S condition was equivalent to 64.0 +/- 4.4% of the height in the maximal jump on the firm surface. The oxygen requirement was 7.39 +/- 0.33 liters in the S condition and 6.24 +/- 0.69 liters in the F condition, and the energy expenditure was 37.0 +/- 1.64 kcal and 31.2 +/- 3.46 kcal respectively. Both differences were statistically significant (p < 0.01). The energy expenditure of jumping in the S condition was equivalent to 119.4 +/- 10.1% of that in the F condition, a ratio lower than that reported for walking and close to that for running. PMID:16617210

  15. Anaerobic contribution during maximal anaerobic running test: correlation with maximal accumulated oxygen deficit.

    PubMed

    Zagatto, A; Redkva, P; Loures, J; Kalva Filho, C; Franco, V; Kaminagakura, E; Papoti, M

    2011-12-01

    The aims of this study were: (i) to measure energy system contributions in the maximal anaerobic running test (MART); and (ii) to verify any correlation between MART and maximal accumulated oxygen deficit (MAOD). Eleven members of the armed forces were recruited for this study. Participants performed MART and MAOD, both accomplished on a treadmill. MART consisted of intermittent exercise: 20-s efforts, each followed by 100-s recovery. Energy system contributions during MART were determined by excess post-exercise oxygen consumption, lactate response, and oxygen uptake measurements. MAOD was determined from five submaximal-intensity exercises and one supramaximal exercise at 120% of maximal oxygen uptake intensity. Energy system contributions were 65.4±1.1% aerobic; 29.5±1.1% anaerobic alactic; and 5.1±0.5% anaerobic lactic throughout the whole test, while during effort periods alone the anaerobic contribution corresponded to 73.5±1.0%. Maximal power found in MART corresponded to 111.25±1.33 mL/kg/min but did not significantly correlate with MAOD (4.69±0.30 L and 70.85±4.73 mL/kg). We concluded that the anaerobic alactic system is the main energy system in MART efforts and this test did not significantly correlate to MAOD.
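The MAOD procedure mentioned above follows a standard recipe: regress steady-state oxygen uptake on submaximal intensity, extrapolate the line to the supramaximal intensity to get the oxygen demand, and subtract the oxygen actually taken up. A minimal sketch, with illustrative units (L/min, arbitrary intensity), not the authors' exact protocol:

```python
def maod(intensities, steady_vo2, supra_intensity, measured_o2, duration_min):
    """Maximal accumulated oxygen deficit.

    Fits VO2 = slope * intensity + intercept to the submaximal bouts,
    extrapolates to the supramaximal intensity, and returns
    (predicted O2 demand over the bout) - (measured O2 uptake)."""
    n = len(intensities)
    mx = sum(intensities) / n
    my = sum(steady_vo2) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(intensities, steady_vo2))
             / sum((x - mx) ** 2 for x in intensities))
    intercept = my - slope * mx
    demand = (slope * supra_intensity + intercept) * duration_min
    return demand - measured_o2
```

With a perfectly linear submaximal series, the deficit is simply the extrapolated demand minus whatever was measured during the supramaximal bout.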

  16. Does Maximizing Information at the Cut Score Always Maximize Classification Accuracy and Consistency?

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Babcock, Ben

    2016-01-01

    A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…

  17. From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2014-12-01

    The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that-on a logarithmic scale-the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle-class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset-prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
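The Gauss/Laplace parallel the abstract draws rests on a standard pair of constrained maximum-entropy results (stated here for background; the paper's own derivation proceeds via equality maximization rather than entropy): maximizing differential entropy subject to a fixed variance yields the Gauss law, while fixing the mean absolute deviation instead yields the Laplace law.

```latex
\max_{f}\; -\int f(x)\ln f(x)\,dx
\quad\text{s.t.}\quad \int f(x)\,dx = 1,\;\; \int (x-\mu)^2 f(x)\,dx = \sigma^2
\;\Longrightarrow\; f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/2\sigma^2},
```

```latex
\max_{f}\; -\int f(x)\ln f(x)\,dx
\quad\text{s.t.}\quad \int f(x)\,dx = 1,\;\; \int |x-\mu|\, f(x)\,dx = b
\;\Longrightarrow\; f(x) = \frac{1}{2b}\, e^{-|x-\mu|/b}.
```

Applying the second result on a logarithmic scale gives the log-Laplace statistics, and hence the Paretian power-law tails, that the abstract describes.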

  18. Maximizing the potential of process engineering databases

    SciTech Connect

    McGuire, M.L.; Jones, K.

    1989-11-01

    The authors discuss their work with a major oil and gas production company, showing that technical computing, particularly the utilization of integration databases, high-performance engineering workstations, and data networking, can create major profit opportunities. Properly utilized technical computing can make more time available for optimizing the conceptual design process that critically affects the life-cycle economic performance of process plants. Computer-aided drafting has little influence on total economic performance once a plant is operating, but an investment in process engineering effectiveness can earn a leveraged benefit through its effect on both capital investment and future operating costs.

  19. Expected endings and judged duration.

    PubMed

    Jones, M R; Boltz, M G; Klein, J M

    1993-09-01

    In four experiments, the predictions of an expectancy/contrast model (Jones & Boltz, 1989) for judged duration were evaluated. In Experiments 1 and 2, listeners estimated the relative durations of auditory pattern pairs that varied in contextual phrasing and temporal contrast. The results showed that when the second pattern of a pair either seems to (Experiments 1 and 2) or actually does (Experiment 2) end earlier (later) than the first, subjects judge it as being relatively shorter (longer). In Experiment 3, listeners heard single patterns in which notes immediately preceding the final one were omitted. Timing of the final (target) tone was varied such that it was one beat early, on time, or one beat late. Listeners' ratings of target tones revealed systematic effects of phrasing and target timing. In Experiment 4, listeners temporally completed (extrapolated) sequences of Experiment 3 that were modified to exclude the target tone. The results again showed that phrase context systematically influenced expectancies about "when" sequences should end. As a set, these studies demonstrate the effects of event structure and anticipatory attending upon experienced duration and are discussed in terms of the expectancy/contrast model.

  20. PERMANENT GENETIC RESOURCES: Development of polymorphic microsatellite markers in Acer mono Maxim.

    PubMed

    Kikuchi, S; Shibata, M

    2008-03-01

    Thirteen polymorphic microsatellite markers were developed for Acer mono Maxim., one of the major components of deciduous forests in Japan. An average of 13.8 alleles per locus was found, with expected heterozygosity ranging from 0.140 to 0.945 in 34 A. mono individuals from the Ogawa Forest Reserve in Ibaraki Prefecture, Japan. This set of microsatellite markers can be used to analyse mating patterns and gene flow in A. mono populations.
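The expected-heterozygosity figures reported for these markers are conventionally computed with Nei's unbiased estimator from sample allele frequencies; the sketch below assumes that standard estimator (the abstract does not state which one was used).

```python
def expected_heterozygosity(alleles):
    """Nei's (1978) unbiased expected heterozygosity from a list of sampled
    allele copies at one locus: He = n/(n-1) * (1 - sum_i p_i^2)."""
    n = len(alleles)
    counts = {}
    for a in alleles:
        counts[a] = counts.get(a, 0) + 1
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1.0 - sum_p2)
```

A locus with two equally frequent alleles in a sample of four gene copies, for instance, gives He = (4/3) * 0.5 = 2/3; a monomorphic locus gives 0.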

  1. Relative income expectations, expected malpractice premium costs, and other determinants of physician specialty choice.

    PubMed

    Kiker, B F; Zeh, M

    1998-06-01

    We analyze the effects of relative income expectations, expected malpractice premium cost, and other economic and noneconomic factors on physician specialty choice. The data for this paper are taken from responses of medical students who completed the Association of American Medical Colleges' Medical School Questionnaire and graduated from medical school in 1995. A random utility model is used to guide our thinking; the econometric technique is multinomial logit regression. Selection of a surgical or support specialty is found to be positively income motivated, while the influence of expected relative income is negatively related to the choice of primary-care and medical practices. Concern over malpractice premium cost is negatively related to surgical and positively related to primary-care selection. Other important determinants of choice are planned location of practice, length of residency, type of medical school attended, score on the science problems section of the Medical College Admission Test, predictable working hours and perceived prestige of the specialty. Policies that alter expected relative income, length of residency, desired location of practice, medical school attended, predictable working hours, and prestige of practice, rather than financial aid, may be appropriate for correcting a perceived maldistribution of physicians among specialties.

  2. Proper Timing of Foot-and-Mouth Disease Vaccination of Piglets with Maternally Derived Antibodies Will Maximize Expected Protection Levels.

    PubMed

    Dekker, Aldo; Chénard, Gilles; Stockhofe, Norbert; Eblé, Phaedra L

    2016-01-01

    We investigated to what extent maternally derived antibodies interfere with foot-and-mouth disease (FMD) vaccination in order to determine the factors that influence the correct vaccination for piglets. Groups of piglets with maternally derived antibodies were vaccinated at different time points following birth, and the antibody titers to FMD virus (FMDV) were measured using virus neutralization tests (VNT). We used 50 piglets from 5 sows that had been vaccinated 3 times intramuscularly in the neck during pregnancy with FMD vaccine containing strains of FMDV serotypes O, A, and Asia-1. Four groups of 10 piglets were vaccinated intramuscularly in the neck at 3, 5, 7, or 9 weeks of age using a monovalent Cedivac-FMD vaccine (serotype A TUR/14/98). One group of 10 piglets with maternally derived antibodies was not vaccinated, and another group of 10 piglets without maternally derived antibodies was vaccinated at 3 weeks of age and served as a control group. Sera samples were collected, and antibody titers were determined using VNT. In our study, the antibody responses of piglets with maternally derived antibodies vaccinated at 7 or 9 weeks of age were similar to the responses of piglets without maternally derived antibodies vaccinated at 3 weeks of age. The maternally derived antibody levels in piglets depended very strongly on the antibody titer in the sow, so the optimal time for vaccination of piglets will depend on the vaccination scheme and quality of vaccine used in the sows and should, therefore, be monitored and reviewed on regular basis in countries that use FMD prophylactic vaccination. PMID:27446940

  3. Proper Timing of Foot-and-Mouth Disease Vaccination of Piglets with Maternally Derived Antibodies Will Maximize Expected Protection Levels

    PubMed Central

    Dekker, Aldo; Chénard, Gilles; Stockhofe, Norbert; Eblé, Phaedra L.

    2016-01-01

    We investigated to what extent maternally derived antibodies interfere with foot-and-mouth disease (FMD) vaccination in order to determine the factors that influence the correct vaccination for piglets. Groups of piglets with maternally derived antibodies were vaccinated at different time points following birth, and the antibody titers to FMD virus (FMDV) were measured using virus neutralization tests (VNT). We used 50 piglets from 5 sows that had been vaccinated 3 times intramuscularly in the neck during pregnancy with FMD vaccine containing strains of FMDV serotypes O, A, and Asia-1. Four groups of 10 piglets were vaccinated intramuscularly in the neck at 3, 5, 7, or 9 weeks of age using a monovalent Cedivac-FMD vaccine (serotype A TUR/14/98). One group of 10 piglets with maternally derived antibodies was not vaccinated, and another group of 10 piglets without maternally derived antibodies was vaccinated at 3 weeks of age and served as a control group. Sera samples were collected, and antibody titers were determined using VNT. In our study, the antibody responses of piglets with maternally derived antibodies vaccinated at 7 or 9 weeks of age were similar to the responses of piglets without maternally derived antibodies vaccinated at 3 weeks of age. The maternally derived antibody levels in piglets depended very strongly on the antibody titer in the sow, so the optimal time for vaccination of piglets will depend on the vaccination scheme and quality of vaccine used in the sows and should, therefore, be monitored and reviewed on regular basis in countries that use FMD prophylactic vaccination. PMID:27446940

  4. Lighting Utilization.

    ERIC Educational Resources Information Center

    Crank, Ron

    This instructional unit is one of 10 developed by students on various energy-related areas that deals specifically with lighting utilization. Its objective is for the student to be able to outline the development of lighting use and conservation and identify major types and operating characteristics of lamps used in electric lighting. Some topics…

  5. Are One Man's Rags Another Man's Riches? Identifying Adaptive Expectations Using Panel Data

    ERIC Educational Resources Information Center

    Burchardt, Tania

    2005-01-01

    One of the motivations frequently cited by Sen and Nussbaum for moving away from a utility metric towards a capabilities framework is a concern about adaptive preferences or conditioned expectations. If utility is related to the satisfaction of aspirations or expectations, and if these are affected by the individual's previous experience of…

  6. Freshmen Expectations of the University of Maryland, 1971-72.

    ERIC Educational Resources Information Center

    Horowitz, Joseph L.; Sedlacek, William E.

    The present study had as its purpose to determine incoming freshmen expectations of the University of Maryland. Two models of the College and University Environment Scales (CUES) were utilized in the study, and the data were analyzed to determine the relationship between CUES I and CUES II results, between 1969 and 1971 freshmen perceptions, and…

  7. Space expectations: Latest survey results

    NASA Astrophysics Data System (ADS)

    Raitt, David; Swan, Cathy; Swan, Peter; Woods, Arthur

    2010-11-01

    At the 59th IAC in Glasgow, a paper was presented describing two studies being carried out by Commission VI of the International Academy of Astronautics on the impact of space activities upon society. One of these studies sought to discover the hopes, aspirations and expectations of those outside the space field - the person in the street - regarding space activities. The paper reviewed the thought processes and decisions leading up to the commencement of the survey; documented the reasoning behind the questions the public were asked; described the efforts to translate the questionnaire into the six UNESCO languages to achieve wider participation; and provided an overview of results to date. This present paper provides an update on this Space Expectations survey as the study comes to a close. The paper briefly discusses the addition of new languages for the questionnaire and the drive to make the survey better known and encourage participation worldwide, before going on to provide a detailed analysis of the latest results. Insights include respondents' thoughts regarding the visions and costs of space activities, how much people feel part of them, and whether and how they would like to be more involved.

  8. Comparison between static maximal force and handbrake pulling force.

    PubMed

    Chateauroux, E; Wang, X

    2012-01-01

    The measurement of maximum pulling force is important not only for specifying force limits for industrial workers but also for designing controls requiring high force. This paper presents a comparison between maximal static handbrake pulling force (FST) and force exerted during a normal handbrake pulling task (FDY). These forces were measured for different handle locations and subject characteristics. Participants were asked to pull a handbrake on an adjustable car mock-up as they would do when parking their own car, then to exert a force as high as possible on the pulled handbrake. Hand pulling forces were measured using a six-axis force sensor. Five fixed handbrake positions were tested, as well as a neutral handbrake position defined by the subject. FST and FDY were significantly correlated. Both were found to be dependent on handbrake position, age, and gender. As expected, women and older subjects exerted lower forces. FST was significantly higher than FDY. The ratio FmR (FDY divided by FST) was also analyzed. Women showed higher FmR than men, meaning that the task demanded a larger fraction of women's maximal muscle capability. FmR was also influenced by handbrake location. These data will be useful for handbrake design.

  9. Quantum Mechanics and the Principle of Maximal Variety

    NASA Astrophysics Data System (ADS)

    Smolin, Lee

    2016-06-01

    Quantum mechanics is derived from the principle that the universe contain as much variety as possible, in the sense of maximizing the distinctiveness of each subsystem. The quantum state of a microscopic system is defined to correspond to an ensemble of subsystems of the universe with identical constituents and similar preparations and environments. A new kind of interaction is posited amongst such similar subsystems which acts to increase their distinctiveness, by extremizing the variety. In the limit of large numbers of similar subsystems this interaction is shown to give rise to Bohm's quantum potential. As a result the probability distribution for the ensemble is governed by the Schroedinger equation. The measurement problem is naturally and simply solved. Microscopic systems appear statistical because they are members of large ensembles of similar systems which interact non-locally. Macroscopic systems are unique, and are not members of any ensembles of similar systems. Consequently their collective coordinates may evolve deterministically. This proposal could be tested by constructing quantum devices from entangled states of a modest number of qubits which, by their combinatorial complexity, can be expected to have no natural copies.

  10. Primary Care Clinician Expectations Regarding Aging

    ERIC Educational Resources Information Center

    Davis, Melinda M.; Bond, Lynne A.; Howard, Alan; Sarkisian, Catherine A.

    2011-01-01

    Purpose: Expectations regarding aging (ERA) in community-dwelling older adults are associated with personal health behaviors and health resource usage. Clinicians' age expectations likely influence patients' expectations and care delivery patterns; yet, limited research has explored clinicians' age expectations. The Expectations Regarding Aging…

  11. Core facilities: maximizing the return on investment.

    PubMed

    Farber, Gregory K; Weiss, Linda

    2011-08-10

    To conduct high-quality state-of-the-art research, clinical and translational scientists need access to specialized core facilities and appropriately trained staff. In this time of economic constraints and increasing research costs, organized and efficient core facilities are essential for researchers who seek to investigate complex translational research questions. Here, we describe efforts at the U.S. National Institutes of Health and academic medical centers to enhance the utility of cores.

  12. Core Facilities: Maximizing the Return on Investment

    PubMed Central

    Farber, Gregory K.; Weiss, Linda

    2011-01-01

    To conduct high-quality state-of-the-art research, clinical and translational scientists need access to specialized core facilities and appropriately trained staff. In this time of economic constraints and increasing research costs, organized and efficient core facilities are essential for researchers who seek to investigate complex translational research questions. Here, we describe efforts at the U.S. National Institutes of Health and academic medical centers to enhance the utility of cores. PMID:21832235

  13. Diffusion Tensor Estimation by Maximizing Rician Likelihood

    PubMed Central

    Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry

    2012-01-01

    Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
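
As a one-dimensional illustration of fitting under Rician rather than Gaussian noise, the sketch below performs a grid-search maximum-likelihood estimate of a single signal amplitude from simulated magnitude data. This is a toy analogue with assumed values (known sigma, true amplitude 3.0), not the paper's augmented tensor model:

```python
# Toy 1-D analogue of Rician maximum-likelihood fitting (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
sigma, nu_true = 1.0, 3.0

# Simulate magnitude (Rician) data: |nu + complex Gaussian noise|.
n = 2000
samples = np.abs(nu_true + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

def neg_log_lik(nu, x, sigma):
    # Rician pdf: (x/sigma^2) exp(-(x^2 + nu^2)/(2 sigma^2)) I0(x nu / sigma^2)
    return -np.sum(
        np.log(x / sigma**2)
        - (x**2 + nu**2) / (2 * sigma**2)
        + np.log(np.i0(x * nu / sigma**2))
    )

# Grid-search MLE of the underlying amplitude.
grid = np.linspace(0.0, 6.0, 601)
nu_hat = grid[np.argmin([neg_log_lik(nu, samples, sigma) for nu in grid])]

# The naive Gaussian estimate (the sample mean) is biased upward for magnitudes.
naive = samples.mean()
```

The upward bias of the naive mean is the kind of divergence from true values that motivates likelihood-based estimation here.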

  14. Reflection quasilattices and the maximal quasilattice

    NASA Astrophysics Data System (ADS)

    Boyle, Latham; Steinhardt, Paul J.

    2016-08-01

    We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.

  15. Maximizing exosome colloidal stability following electroporation.

    PubMed

    Hood, Joshua L; Scott, Michael J; Wickline, Samuel A

    2014-03-01

    Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.

  16. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables arise, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by instead maximizing its entropy. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
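
The Gaussianization baseline that MaxEnt generalizes can be sketched with a rank-based transform: map each observation's empirical CDF value through the Gaussian quantile function. The code below illustrates that pre-existing idea on a hypothetical cubic distortion; it is not the paper's MaxEnt algorithm:

```python
# Rank-based Gaussianization of a monotonically distorted signal (toy sketch).
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

# A sum of uniforms (approximately Gaussian) pushed through a monotonic
# nonlinearity f(s) = s**3 (hypothetical distortion).
source = [sum(random.uniform(-1, 1) for _ in range(12)) for _ in range(1000)]
observed = [s ** 3 for s in source]

# Map empirical CDF values through the Gaussian quantile function,
# recovering the signal up to a monotone (approximately affine) transform.
n = len(observed)
order = sorted(range(n), key=lambda i: observed[i])
recovered = [0.0] * n
for rank, i in enumerate(order):
    recovered[i] = norm.inv_cdf((rank + 0.5) / n)
```

Because the transform is rank-based, it undoes any strictly monotonic distortion without knowing its functional form.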

  17. Predicting maximal grip strength using hand circumference.

    PubMed

    Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves

    2010-12-01

    The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r² = 0.624 for Jamar; r² = 0.683 for Myogrip and r² = 0.473 for Vigorimeter; all adjusted r²). Moreover, a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone.
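
A single-predictor least-squares model of the kind described can be sketched as follows. The data points and the resulting coefficients are fabricated for illustration, since the abstract does not report the fitted equation:

```python
# Least-squares fit of maximal grip strength (MGS) on hand circumference,
# mirroring the single-predictor model described above. All data values
# here are hypothetical.

hand_circ = [18.0, 19.5, 21.0, 22.5, 24.0]   # cm (hypothetical)
mgs       = [22.0, 28.0, 35.0, 41.0, 48.0]   # kg (hypothetical)

n = len(hand_circ)
mx = sum(hand_circ) / n
my = sum(mgs) / n
# Ordinary least squares for a single predictor.
slope = sum((x - mx) * (y - my) for x, y in zip(hand_circ, mgs)) / \
        sum((x - mx) ** 2 for x in hand_circ)
intercept = my - slope * mx

def predict_mgs(circumference_cm):
    # Predicted "normal" MGS from hand circumference alone.
    return intercept + slope * circumference_cm
```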

  18. Predicting maximal grip strength using hand circumference.

    PubMed

    Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves

    2010-12-01

    The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r² = 0.624 for Jamar; r² = 0.683 for Myogrip and r² = 0.473 for Vigorimeter; all adjusted r²). Moreover, a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone. PMID:20708427

  19. Maximal and sub-maximal functional lifting performance at different platform heights.

    PubMed

    Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J

    2015-01-01

    Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R² = 0.76 ± 0.18, p < 0.05) and MAWL (R² = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively.
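
The reported linear height-capacity relationship permits predicting capacity at one platform height from capacity at another. A minimal sketch, with a purely hypothetical slope (the abstract reports correlations and standard errors, not the fitted coefficients):

```python
# Predicting discrete lifting capacity at one platform height from capacity
# at another, assuming a linear height-capacity relation as reported above.
# The slope below is hypothetical, not taken from the paper.

SLOPE_KG_PER_M = -25.0  # assumed decrease in MLC per metre of platform height

def predict_mlc(known_height_m, known_mlc_kg, target_height_m):
    # Linear extrapolation along the assumed height-capacity line.
    return known_mlc_kg + SLOPE_KG_PER_M * (target_height_m - known_height_m)

# e.g. 40 kg at 1.30 m implies 30 kg at 1.70 m under this assumed slope
est = predict_mlc(1.30, 40.0, 1.70)
```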

  20. Skeletal muscle vasodilatation during maximal exercise in health and disease.

    PubMed

    Calbet, Jose A L; Lundby, Carsten

    2012-12-15

    Maximal exercise vasodilatation results from the balance between vasoconstricting and vasodilating signals combined with the vascular reactivity to these signals. During maximal exercise with a small muscle mass the skeletal muscle vascular bed is fully vasodilated. During maximal whole body exercise, however, vasodilatation is restrained by the sympathetic system. This is necessary to avoid hypotension since the maximal vascular conductance of the musculature exceeds the maximal pumping capacity of the heart. Endurance training and high-intensity intermittent knee extension training increase the capacity for maximal exercise vasodilatation by 20-30%, mainly due to an enhanced vasodilatory capacity, as maximal exercise perfusion pressure changes little with training. The increase in maximal exercise vascular conductance is to a large extent explained by skeletal muscle hypertrophy and vascular remodelling. The vasodilatory capacity during maximal exercise is reduced or blunted with ageing, as well as in chronic heart failure patients and chronically hypoxic humans; reduced vasodilatory responsiveness and increased sympathetic activity (and probably, altered sympatholysis) are potential mechanisms accounting for this effect. Pharmacological counteraction of the sympathetic restraint may result in lower perfusion pressure and reduced oxygen extraction by the exercising muscles. However, at the same time fast inhibition of the chemoreflex in maximally exercising humans may result in increased vasodilatation, further confirming a restraining role of the sympathetic nervous system on exercise-induced vasodilatation. This is likely to be critical for the maintenance of blood pressure in exercising patients with a limited heart pump capacity.

  1. Skeletal muscle vasodilatation during maximal exercise in health and disease

    PubMed Central

    Calbet, Jose A L; Lundby, Carsten

    2012-01-01

    Maximal exercise vasodilatation results from the balance between vasoconstricting and vasodilating signals combined with the vascular reactivity to these signals. During maximal exercise with a small muscle mass the skeletal muscle vascular bed is fully vasodilated. During maximal whole body exercise, however, vasodilatation is restrained by the sympathetic system. This is necessary to avoid hypotension since the maximal vascular conductance of the musculature exceeds the maximal pumping capacity of the heart. Endurance training and high-intensity intermittent knee extension training increase the capacity for maximal exercise vasodilatation by 20–30%, mainly due to an enhanced vasodilatory capacity, as maximal exercise perfusion pressure changes little with training. The increase in maximal exercise vascular conductance is to a large extent explained by skeletal muscle hypertrophy and vascular remodelling. The vasodilatory capacity during maximal exercise is reduced or blunted with ageing, as well as in chronic heart failure patients and chronically hypoxic humans; reduced vasodilatory responsiveness and increased sympathetic activity (and probably, altered sympatholysis) are potential mechanisms accounting for this effect. Pharmacological counteraction of the sympathetic restraint may result in lower perfusion pressure and reduced oxygen extraction by the exercising muscles. However, at the same time fast inhibition of the chemoreflex in maximally exercising humans may result in increased vasodilatation, further confirming a restraining role of the sympathetic nervous system on exercise-induced vasodilatation. This is likely to be critical for the maintenance of blood pressure in exercising patients with a limited heart pump capacity. PMID:23027820

  2. Motor activity improves temporal expectancy.

    PubMed

    Fautrelle, Lilian; Mareschal, Denis; French, Robert; Addyman, Caspar; Thomas, Elizabeth

    2015-01-01

    Certain brain areas involved in interval timing are also important in motor activity. This raises the possibility that motor activity might influence interval timing. To test this hypothesis, we assessed interval timing in healthy adults following different types of training. The pre- and post-training tasks consisted of a button press in response to the presentation of a rhythmic visual stimulus. Alterations in temporal expectancy were evaluated by measuring response times. Training consisted of responding to the visual presentation of regularly appearing stimuli by either: (1) pointing with a whole-body movement, (2) pointing only with the arm, (3) imagining pointing with a whole-body movement, (4) simply watching the stimulus presentation, (5) pointing with a whole-body movement in response to a target that appeared at irregular intervals, or (6) reading a newspaper. Participants performing a motor activity in response to the regular target showed significant improvements in judgment times compared to individuals with no associated motor activity. Individuals who only imagined pointing with a whole-body movement also showed significant improvements. No improvements were observed in the group that trained with a motor response to an irregular stimulus, hence eliminating the explanation that the improved temporal expectations of the other motor training groups were purely due to an improved motor capacity to press the response button. All groups performed a secondary task equally well, hence indicating that our results could not simply be attributed to differences in attention between the groups. Our results show that motor activity, even when it does not play a causal or corrective role, can lead to improved interval timing judgments. PMID:25806813

  3. Motor activity improves temporal expectancy.

    PubMed

    Fautrelle, Lilian; Mareschal, Denis; French, Robert; Addyman, Caspar; Thomas, Elizabeth

    2015-01-01

    Certain brain areas involved in interval timing are also important in motor activity. This raises the possibility that motor activity might influence interval timing. To test this hypothesis, we assessed interval timing in healthy adults following different types of training. The pre- and post-training tasks consisted of a button press in response to the presentation of a rhythmic visual stimulus. Alterations in temporal expectancy were evaluated by measuring response times. Training consisted of responding to the visual presentation of regularly appearing stimuli by either: (1) pointing with a whole-body movement, (2) pointing only with the arm, (3) imagining pointing with a whole-body movement, (4) simply watching the stimulus presentation, (5) pointing with a whole-body movement in response to a target that appeared at irregular intervals, or (6) reading a newspaper. Participants performing a motor activity in response to the regular target showed significant improvements in judgment times compared to individuals with no associated motor activity. Individuals who only imagined pointing with a whole-body movement also showed significant improvements. No improvements were observed in the group that trained with a motor response to an irregular stimulus, hence eliminating the explanation that the improved temporal expectations of the other motor training groups were purely due to an improved motor capacity to press the response button. All groups performed a secondary task equally well, hence indicating that our results could not simply be attributed to differences in attention between the groups. Our results show that motor activity, even when it does not play a causal or corrective role, can lead to improved interval timing judgments.

  4. Motor Activity Improves Temporal Expectancy

    PubMed Central

    Fautrelle, Lilian; Mareschal, Denis; French, Robert; Addyman, Caspar; Thomas, Elizabeth

    2015-01-01

    Certain brain areas involved in interval timing are also important in motor activity. This raises the possibility that motor activity might influence interval timing. To test this hypothesis, we assessed interval timing in healthy adults following different types of training. The pre- and post-training tasks consisted of a button press in response to the presentation of a rhythmic visual stimulus. Alterations in temporal expectancy were evaluated by measuring response times. Training consisted of responding to the visual presentation of regularly appearing stimuli by either: (1) pointing with a whole-body movement, (2) pointing only with the arm, (3) imagining pointing with a whole-body movement, (4) simply watching the stimulus presentation, (5) pointing with a whole-body movement in response to a target that appeared at irregular intervals, or (6) reading a newspaper. Participants performing a motor activity in response to the regular target showed significant improvements in judgment times compared to individuals with no associated motor activity. Individuals who only imagined pointing with a whole-body movement also showed significant improvements. No improvements were observed in the group that trained with a motor response to an irregular stimulus, hence eliminating the explanation that the improved temporal expectations of the other motor training groups were purely due to an improved motor capacity to press the response button. All groups performed a secondary task equally well, hence indicating that our results could not simply be attributed to differences in attention between the groups. Our results show that motor activity, even when it does not play a causal or corrective role, can lead to improved interval timing judgments. PMID:25806813

  5. Maximizing semi-active vibration isolation utilizing a magnetorheological damper with an inner bypass configuration

    SciTech Connect

    Bai, Xian-Xu; Wereley, Norman M.; Hu, Wei

    2015-05-07

    A single-degree-of-freedom (SDOF) semi-active vibration control system based on a magnetorheological (MR) damper with an inner bypass is investigated in this paper. The MR damper, employing a pair of concentric tubes between which the key structure, i.e., the inner bypass, is formed and the MR fluids are energized, is designed to provide a large dynamic range (i.e., the ratio of field-on damping force to field-off damping force) and damping force range. The damping force performance of the MR damper is modeled using a phenomenological model and verified by experimental tests. In order to assess its feasibility and capability in vibration control systems, the mathematical model of a SDOF semi-active vibration control system based on the MR damper and a skyhook control strategy is established. Using an MTS 244 hydraulic vibration exciter system and a dSPACE DS1103 real-time simulation system, an experimental study of the SDOF semi-active vibration control system is also conducted. Simulation results are compared to experimental measurements.

  6. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
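
For reference, conventional simulated annealing, the baseline algorithm this paper recasts as a non-cooperative game, can be sketched on a toy one-dimensional objective (the game-theoretic modification itself is not reproduced here):

```python
# Conventional simulated annealing on a toy 1-D objective (illustrative only;
# not the COIN-modified algorithm described in the paper).
import math
import random

random.seed(0)

def objective(x):
    return (x - 3.0) ** 2  # minimum at x = 3

x = 0.0
best = x
temp = 1.0
for step in range(5000):
    cand = x + random.gauss(0.0, 0.5)          # random perturbation of the variable
    delta = objective(cand) - objective(x)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    if objective(x) < objective(best):
        best = x
    temp *= 0.999                              # geometric cooling schedule
```

In the paper's recasting, each such variable update is instead made by a self-interested player, which replaces this blind perturbation step with a more intelligent exploration move.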

  7. Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery

    NASA Technical Reports Server (NTRS)

    Rose, William I.

    1993-01-01

    Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth and particle mass involcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectrometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.

  8. A sampling plan for conduit-flow karst springs: Minimizing sampling cost and maximizing statistical utility

    USGS Publications Warehouse

    Currens, J.C.

    1999-01-01

    Analytical data for nitrate and triazines from 566 samples collected over a 3-year period at Pleasant Grove Spring, Logan County, KY, were statistically analyzed to determine the minimum data set needed to calculate meaningful yearly averages for a conduit-flow karst spring. Results indicate that a biweekly sampling schedule augmented with bihourly samples from high-flow events will provide meaningful suspended-constituent and dissolved-constituent statistics. Unless collected over an extensive period of time, daily samples may not be representative and may also be autocorrelated. All high-flow events resulting in a significant deflection of a constituent from base-line concentrations should be sampled. Either the geometric mean or the flow-weighted average of the suspended constituents should be used. If automatic samplers are used, then they may be programmed to collect storm samples as frequently as every few minutes to provide details on the arrival time of constituents of interest. However, only samples collected bihourly should be used to calculate averages. By adopting a biweekly sampling schedule augmented with high-flow samples, the need to continuously monitor discharge, or to search for and analyze existing data to develop a statistically valid monitoring plan, is lessened.
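
The two recommended summary statistics, the geometric mean and the flow-weighted average, can be computed as follows; the concentrations and discharges below are fabricated for illustration:

```python
# Geometric mean and flow-weighted average of constituent concentrations,
# the two statistics recommended above for suspended constituents.
# All sample values are hypothetical.
import math

conc      = [2.1, 3.4, 8.9, 1.7, 2.5]   # mg/L, bihourly samples
discharge = [0.5, 0.6, 4.2, 0.4, 0.5]   # m^3/s at each sample time

# Geometric mean: exponential of the mean log concentration.
geo_mean = math.exp(sum(math.log(c) for c in conc) / len(conc))

# Flow-weighted average: each concentration weighted by its discharge.
flow_weighted = sum(c * q for c, q in zip(conc, discharge)) / sum(discharge)
```

The flow-weighted average gives high-flow samples proportionally more influence, which is why it suits event-dominated constituent loads.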

  9. Maximizing semi-active vibration isolation utilizing a magnetorheological damper with an inner bypass configuration

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Wereley, Norman M.; Hu, Wei

    2015-05-01

    A single-degree-of-freedom (SDOF) semi-active vibration control system based on a magnetorheological (MR) damper with an inner bypass is investigated in this paper. The MR damper employing a pair of concentric tubes, between which the key structure, i.e., the inner bypass, is formed and MR fluids are energized, is designed to provide large dynamic range (i.e., ratio of field-on damping force to field-off damping force) and damping force range. The damping force performance of the MR damper is modeled using phenomenological model and verified by the experimental tests. In order to assess its feasibility and capability in vibration control systems, the mathematical model of a SDOF semi-active vibration control system based on the MR damper and skyhook control strategy is established. Using an MTS 244 hydraulic vibration exciter system and a dSPACE DS1103 real-time simulation system, experimental study for the SDOF semi-active vibration control system is also conducted. Simulation results are compared to experimental measurements.

  10. Optimal stack gas cleaning technology to maximize coal utilization in electric power generation

    SciTech Connect

    Emish, G.J.; Schulte, W.; Ellison, W.

    1997-07-01

    Major trends and developments are affecting the availability, cost and comparative advantages to be assessed in the choice of alternative primary energy/fuel sources and stack gas cleaning processes. As a result, electric power development can be seen to be in a transitional period leading to broadened, principal use of plentiful, higher-sulfur fossil fuels, e.g. bituminous coal, petroleum coke, Orimulsion, etc., accompanied by gas cleaning system design affording minimum total cost per ton of SO2 removal in conjunction with an advantageous, increased volume of high-value sulfurous byproduct generation.

  11. Banking on a bad bet. Probability matching in risky choice is linked to expectation generation.

    PubMed

    James, Greta; Koehler, Derek J

    2011-06-01

    Probability matching is the tendency to match choice probabilities to outcome probabilities in a binary prediction task. This tendency is a long-standing puzzle in the study of decision making under risk and uncertainty, because always predicting the more probable outcome across a series of trials (maximizing) would yield greater predictive accuracy and payoffs. In three experiments, we tied the predominance of probability matching over maximizing to a generally adaptive cognitive operation that generates expectations regarding the aggregate outcomes of an upcoming sequence of events. Under conditions designed to diminish the generation or perceived applicability of such expectations, we found that the frequency of probability-matching behavior dropped substantially and maximizing became the norm.
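
The payoff gap between matching and maximizing is simple arithmetic: if an outcome occurs with probability p = 0.7, matching yields expected accuracy p^2 + (1 - p)^2 = 0.58, while always predicting the majority outcome yields 0.7. A minimal sketch:

```python
# Expected predictive accuracy of probability matching vs. maximizing
# for a binary outcome occurring with probability p (illustrative values).

def matching_accuracy(p: float) -> float:
    # Predict each outcome with the same probability it occurs.
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p: float) -> float:
    # Always predict the more probable outcome.
    return max(p, 1 - p)

p = 0.7
gap = maximizing_accuracy(p) - matching_accuracy(p)  # maximizing wins by 0.12
```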

  12. Family context and adolescents' fertility expectations.

    PubMed

    Trent, K

    1994-09-01

    Data from the National Longitudinal Surveys of Labor Market Experience of Youth are used to examine and contrast the effects of family context and individual characteristics on adolescents' expectations about adolescent fertility, nonmarital childbearing, family size, and childlessness. The findings indicate that family structure has modest but specific effects on adolescents' fertility expectations. Living with the mother only increases expectations for nonmarital childbearing, and living with the father (without the biological mother) lowers the total number of children expected. A larger sibship size raises expectations for nonmarital childbearing and family size. Poverty raises expectations for adolescent childbearing but does not affect other fertility expectations. Adolescent women are less likely than men to expect nonmarital childbearing and, overall, expect fewer children. Blacks are more likely than Whites to expect adolescent and nonmarital fertility, and Hispanics are significantly less likely than non-Hispanic Whites to expect childlessness.

  13. Maximal Oxygen Uptake, Sweating and Tolerance to Exercise in the Heat

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Castle, B. L.; Ruff, W. K.

    1972-01-01

    The physiological mechanisms that facilitate acute acclimation to heat have not been fully elucidated, but the result is the establishment of a more efficient cardiovascular system to increase heat dissipation via increased sweating, which allows the acclimated man to function with a cooler internal environment and to extend his performance. Men in good physical condition with high maximal oxygen uptakes generally acclimate to heat more rapidly and retain acclimation longer than men in poorer condition. Also, upon first exposure, trained men tolerate exercise in the heat better than untrained men. Both resting in heat and physical training in a cool environment confer only partial acclimation upon first exposure to work in the heat. These observations suggest separate additive stimuli of metabolic heat from exercise and environmental heat to increase sweating during the acclimation process. However, the necessity of utilizing physical exercise during acclimation has been questioned. Bradbury et al. (1964) concluded that exercise has no effect on the course of heat acclimation, since increased sweating can be induced merely by heating resting subjects. Preliminary evidence suggests there is a direct relationship between the maximal oxygen uptake and the capacity to maintain thermal regulation, particularly through the control of sweating. Since increased sweating is an important mechanism for the development of heat acclimation, and fit men have high sweat rates, it follows that upon initial exposure to exercise in the heat, men with high maximal oxygen uptakes should exhibit less strain than men with lower maximal oxygen uptakes. The purpose of this study was: (1) to determine whether men with higher maximal oxygen uptakes exhibit greater tolerance than men with lower oxygen uptakes during early exposure to exercise in the heat, and (2) to investigate further the mechanism of the relationship between sweating and maximal work capacity.

  14. Maximizing industrial infrastructure efficiency in Iceland

    NASA Astrophysics Data System (ADS)

    Ingason, Helgi Thor; Sigfusson, Thorsteinn I.

    2010-08-01

    As a consequence of the increasing aluminum production in Iceland, local processing of aluminum skimmings has become a feasible business opportunity. A recycling plant for this purpose was built in Helguvik on the Reykjanes peninsula in 2003. The case of the recycling plant reflects increased concern regarding environmental aspects of the industry. An interesting characteristic of this plant is that it is run in the same facilities as a large fishmeal production installation; it is operated by the same personnel and uses, in part, the same equipment and infrastructure. This paper reviews the grounds for these decisions and the experience of this merger of a traditional fishmeal industry and a more recent aluminum melting industry after 6 years of operation. The paper is written by the original entrepreneurs behind the company, who provide observations on how the aluminum industry in Iceland has evolved since the start of Alur's operation and what might be expected in the near future.

  15. Maximal stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Agarwal, Sahil; Wettlaufer, John

    2015-11-01

    We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Benard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering (2015), but their variation with noise amplitude exhibits surprising behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity is lost; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations. Finally, we note that these solutions demonstrate that the effect of noise is equivalent to the effect of chaos.

  16. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    PubMed Central

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals' decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility-maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time, and another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes, and even of lagged measures of self-reported health, as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217

  17. Maximality-Based Structural Operational Semantics for Petri Nets

    NASA Astrophysics Data System (ADS)

    Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda

    2009-03-01

    The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which permits expressing true concurrency in a natural way without splitting actions into their start and end events. This is done by giving a maximality-based structural operational semantics for the model of Place/Transition Petri nets in terms of maximality-based labeled transition system structures.

  18. WHO expectation and industry goals.

    PubMed

    Vandersmissen, W

    2001-02-01

    It is expected that the world's vaccine market will show robust growth over the next few years, yet this growth will come predominantly from the introduction of new vaccines in industrialised countries. Economic market forces will increasingly direct vaccine sales and vaccine development towards the needs of markets with effective purchasing power. Yet the scientific and technological progress that drives the development of such innovative vaccines holds the promise of applicability for vaccines that are highly desirable for developing countries. Corrective measures that take into account economic and industrial reality must be considered to span the widening gap between richer and poorer countries in terms of availability and general use of current and recent vaccines. Such measures must help developing countries gain access to future vaccines for diseases that predominantly or exclusively affect them, but for which poor economic prospects do not provide a basis for the vaccine industry to undertake costly research and development programmes. Recent initiatives such as GAVI, including the establishment of a reliable, guaranteed purchase fund, could provide a solution to the problem. PMID:11166883

  19. Violating Bell inequalities maximally for two d-dimensional systems

    SciTech Connect

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-09-15

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of this eigenvalue for large dimension. A family of elegant entangled states |ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state, yet lie close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
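
    The correspondence between maximal violation and the largest eigenvalue of the Bell operator can be illustrated in the simplest case, d = 2, where the operator takes the CHSH form. The measurement settings below are the standard optimal ones for CHSH, not taken from the paper:

```python
import numpy as np

# Pauli matrices
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Optimal CHSH measurement settings for the two parties
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH Bell operator: A0(x)B0 + A0(x)B1 + A1(x)B0 - A1(x)B1
bell_op = (np.kron(A0, B0) + np.kron(A0, B1)
           + np.kron(A1, B0) - np.kron(A1, B1))

# The maximal violation is the largest eigenvalue, 2*sqrt(2) (Tsirelson's
# bound), exceeding the classical limit of 2; the corresponding
# eigenvector is a maximally entangled (Bell) state.
max_eig = np.linalg.eigvalsh(bell_op).max()
```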

  20. Criticality Maximizes Complexity in Neural Tissue

    PubMed Central

    Timme, Nicholas M.; Marshall, Najja J.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.

    2016-01-01

    The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity—an information theoretic measure—has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online. PMID:27729870
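
    The study's toolbox handles doubly truncated power laws; as a simplified sketch of the underlying idea, the classic continuous (untruncated) maximum likelihood estimator for the power-law exponent can be written in a few lines. The sample size, seed, and exponent below are illustrative:

```python
import math
import random

def fit_power_law_alpha(samples, xmin):
    """Continuous power-law MLE (untruncated): alpha = 1 + n / sum(ln(x/xmin)).
    This is the simpler classic estimator; the toolbox cited in the
    abstract additionally accounts for double truncation."""
    tail = [x for x in samples if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Draw synthetic avalanche sizes from a power law with exponent 2.5
# via inverse-transform sampling, then recover the exponent.
rng = random.Random(42)
alpha_true, xmin = 2.5, 1.0
data = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(5000)]
alpha_hat = fit_power_law_alpha(data, xmin)
```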

  1. Increasing inspection equipment productivity by utilizing factory automation SW on TeraScan 5XX systems

    NASA Astrophysics Data System (ADS)

    Jakubski, Thomas; Piechoncinski, Michal; Moses, Raphael; Bugata, Bharathi; Schmalfuss, Heiko; Köhler, Ines; Lisowski, Jan; Klobes, Jens; Fenske, Robert

    2009-01-01

    Especially for advanced masks the reticle inspection operation is a very significant cost factor, since it is a time consuming process and inspection tools are becoming disproportionately expensive. Analyzing and categorizing historical equipment utilization times of the reticle inspection tools however showed a significant amount of time which can be classified as non productive. In order to reduce the inspection costs the equipment utilization needed to be improved. The main contributors to non productive time were analyzed and several use cases identified, where automation utilizing a SECS1 equipment interface was expected to help to reduce these non productive times. The paper demonstrates how real time access to equipment utilization data can be applied to better control manufacturing resources. Scenarios are presented where remote monitoring and control of the inspection equipment can be used to avoid setup errors or save inspection time by faster response to problem situations. Additionally a solution to the second important need, the maximization of tool utilization in cases where not all of the intended functions are available, is explained. Both the models and the software implementation are briefly explained. For automation of the so called inspection strategy a new approach which allows separation of the business rules from the automation infrastructure was chosen. Initial results of inspection equipment performance data tracked through the SECS interface are shown. Furthermore a system integration overview is presented and examples of how the inspection strategy rules are implemented and managed are given.

  2. Expected geoneutrino signal at JUNO

    NASA Astrophysics Data System (ADS)

    Strati, Virginia; Baldoncini, Marica; Callegari, Ivan; Mantovani, Fabio; McDonough, William F.; Ricci, Barbara; Xhixha, Gerti

    2015-12-01

    Constraints on the Earth's composition and on its radiogenic energy budget come from the detection of geoneutrinos. The Kamioka Liquid scintillator Antineutrino Detector (KamLAND) and Borexino experiments recently reported the geoneutrino flux, which reflects the amount and distribution of U and Th inside the Earth. The Jiangmen Underground Neutrino Observatory (JUNO) neutrino experiment, designed as a 20 kton liquid scintillator detector, will be built in an underground laboratory in South China about 53 km from the Yangjiang and Taishan nuclear power plants, each one having a planned thermal power of approximately 18 GW. Given the large detector mass and the intense reactor antineutrino flux, JUNO aims not only to collect high statistics antineutrino signals from reactors but also to address the challenge of discriminating the geoneutrino signal from the reactor background. The predicted geoneutrino signal at JUNO is terrestrial neutrino unit (TNU), based on the existing reference Earth model, with the dominant source of uncertainty coming from the modeling of the compositional variability in the local upper crust that surrounds (out to approximately 500 km) the detector. A special focus is dedicated to the 6° × 4° local crust surrounding the detector which is estimated to contribute for the 44% of the signal. On the basis of a worldwide reference model for reactor antineutrinos, the ratio between reactor antineutrino and geoneutrino signals in the geoneutrino energy window is estimated to be 0.7 considering reactors operating in year 2013 and reaches a value of 8.9 by adding the contribution of the future nuclear power plants. In order to extract useful information about the mantle's composition, a refinement of the abundance and distribution of U and Th in the local crust is required, with particular attention to the geochemical characterization of the accessible upper crust where 47% of the expected geoneutrino signal originates and this region contributes

  3. Expectation-Based Control of Noise and Chaos

    NASA Technical Reports Server (NTRS)

    Zak, Michael

    2006-01-01

    A proposed approach to control of noise and chaos in dynamic systems would supplement conventional methods. The approach is based on fictitious forces composed of expectations governed by Fokker-Planck or Liouville equations that describe the evolution of the probability densities of the controlled parameters. These forces would be utilized as feedback control forces that would suppress the undesired diffusion of the controlled parameters. Examples of dynamic systems in which the approach is expected to prove beneficial include spacecraft, electronic systems, and coupled lasers.
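
    A minimal sketch of the idea, assuming a pure diffusion process and an illustrative feedback gain: the fictitious force pulls each realization toward the ensemble expectation (standing in for the mean of the Fokker-Planck density), suppressing the spread that diffusion would otherwise produce.

```python
import random

def final_variance(n_particles=2000, n_steps=500, dt=0.01, sigma=1.0,
                   k_feedback=0.0, seed=1):
    """Euler-Maruyama simulation of diffusing particles with an optional
    fictitious feedback force -k*(x - mean) toward the ensemble
    expectation; returns the final ensemble variance."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    sq = dt ** 0.5
    for _ in range(n_steps):
        mean = sum(xs) / n_particles
        xs = [x - k_feedback * (x - mean) * dt
              + sigma * sq * rng.gauss(0.0, 1.0)
              for x in xs]
    m = sum(xs) / n_particles
    return sum((x - m) ** 2 for x in xs) / n_particles

# Without feedback the variance grows like sigma^2 * t; with feedback it
# saturates near sigma^2 / (2k), i.e. the diffusion is suppressed.
```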

  4. Maximizing Exposure Therapy: An Inhibitory Learning Approach

    PubMed Central

    Craske, Michelle G.; Treanor, Michael; Conway, Chris; Zbozinek, Tomislav; Vervliet, Bram

    2014-01-01

    Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a ‘fear habituation’ approach and ‘belief disconfirmation’ approach within standard cognitive-behavior therapy. Exposure optimization strategies include 1) expectancy violation, 2) deepened extinction, 3) occasional reinforced extinction, 4) removal of safety signals, 5) variability, 6) retrieval cues, 7) multiple contexts, and 8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. PMID:24864005

  5. Maximal stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Agarwal, Sahil; Wettlaufer, J. S.

    2016-01-01

    We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the strength of which depends on how well the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
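
    As a rough illustration (not the background method itself, and with illustrative parameter values), the stochastic Lorenz system can be integrated with an Euler-Maruyama scheme and a simple transport proxy averaged over time:

```python
import random

def stochastic_lorenz(sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                      noise_amp=0.5, dt=0.001, n_steps=200_000, seed=7):
    """Euler-Maruyama integration of the Lorenz equations with additive
    noise on each component; returns the time average of z, a crude
    proxy for transport (not the rigorous background-method bound)."""
    rng = random.Random(seed)
    x, y, z = 1.0, 1.0, 1.0
    sq = dt ** 0.5
    z_sum = 0.0
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x += dx * dt + noise_amp * sq * rng.gauss(0.0, 1.0)
        y += dy * dt + noise_amp * sq * rng.gauss(0.0, 1.0)
        z += dz * dt + noise_amp * sq * rng.gauss(0.0, 1.0)
        z_sum += z
    return z_sum / n_steps

# Comparing, e.g., stochastic_lorenz(noise_amp=0.0) with increasing
# noise amplitudes gives a feel for how noise shifts the mean transport.
```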

  6. Maximizing total nitrogen removal from onsite-generated wastewater.

    PubMed

    Safferman, Steven I; Novellino, Marianna I; Burks, Bennette D; Parker, Robert A

    2006-01-01

    The research reported here examined the use of hydraulic loading strategies to maximize nitrogen removal from onsite-generated wastewater. These strategies are made practical by the inherently intermittent flow of onsite-generated wastewater. Experimentation was conducted at the Western Regional Wastewater Pretreatment Facility in Montgomery County, Ohio, with an established, full-scale onsite wastewater treatment system rated at 500 gallons per day. The onsite wastewater treatment unit was fed primarily with domestic wastewater that had passed through fine screens and grit removal. The dosing schedule was intermittent, representing what would be expected from onsite-generated wastewater. Oxidation occurred in the aeration tank and potentially on the solid-liquid filtration socks within the aeration tank. All major wastewater characterization parameters were monitored during the approximately one-year study, including five-day biochemical oxygen demand (BOD5), total suspended solids (TSS), nitrate, total nitrogen, pH, and alkalinity. Excellent removal of BOD5 and TSS resulted, with the effluent concentration of each parameter substantially and consistently below 10 mg/L for all operating conditions. Excellent total nitrogen removal occurred, typically to below 10 mg/L of nitrogen when the instantaneous flow of wastewater was low, even when the daily hydraulic loading was high. The removal of nitrogen was attributed to microbial biodegradation. This result indicates that the onsite wastewater treatment unit has an inherent denitrification capacity that can be matched with an equalized-hydraulic-loading strategy. The practical ability to equalize and reduce instantaneous loading results from the inherently intermittent nature of the flow associated with onsite wastewater treatment.

  7. Assigning values to intermediate health states for cost-utility analysis: theory and practice.

    PubMed

    Cohen, B J

    1996-01-01

    Cost-utility analysis (CUA) was developed to guide the allocation of health care resources under a budget constraint. As the generally stated goal of CUA is to maximize aggregate health benefits, the philosophical underpinning of this method is classic utilitarianism. Utilitarianism has been criticized as a basis for social choice because of its emphasis on the net sum of benefits without regard to the distribution of benefits. For example, it has been argued that absolute priority should be given to the worst off when making social choices affecting basic needs. Application of classic utilitarianism requires use of strength-of-preference utilities, assessed under conditions of certainty, to assign quality-adjustment factors to intermediate health states. The two methods commonly used to measure strength-of-preference utility, categorical scaling and time tradeoff, produce rankings that systematically give priority to those who are better off. Alternatively, von Neumann-Morgenstern utilities, assessed under conditions of uncertainty, could be used to assign values to intermediate health states. The theoretical basis for this would be Harsanyi's proposal that social choice be made under the hypothetical assumption that one had an equal chance of being anyone in society. If this proposal is accepted, as well as the expected-utility axioms applied to both individual choice and social choice, the preferred societal arrangement is that with the highest expected von Neumann-Morgenstern utility. In the presence of risk aversion, this will give some priority to the worst-off relative to classic utilitarianism. Another approach is to raise the values obtained by time-tradeoff assessments to a power a between 0 and 1. This would explicitly give priority to the worst off, with the degree of priority increasing as a decreases. Results could be presented over a range of a. 
The results of CUA would then provide useful information to those holding a range of philosophical points
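
    The transformation described above, raising time-tradeoff values to a power a between 0 and 1, is easy to illustrate numerically. The health-state values below are invented for illustration:

```python
def adjusted_value(u, a):
    """Raise a time-tradeoff utility u in [0, 1] to a power a in (0, 1];
    smaller a gives greater priority to gains for the worst off."""
    return u ** a

# Gain from improving a poor health state (0.2 -> 0.3) versus an
# already-good one (0.8 -> 0.9), over a range of a.
gains = {}
for a in (1.0, 0.5, 0.25):
    gain_worst = adjusted_value(0.3, a) - adjusted_value(0.2, a)
    gain_best = adjusted_value(0.9, a) - adjusted_value(0.8, a)
    gains[a] = (gain_worst, gain_best)
# At a = 1 the two gains are equal (classic utilitarianism); as a
# decreases, the gain to the worst-off state increasingly dominates.
```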

  8. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    ERIC Educational Resources Information Center

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  9. Minimal Length, Maximal Momentum and the Entropic Force Law

    NASA Astrophysics Data System (ADS)

    Nozari, Kourosh; Pedram, Pouria; Molkara, M.

    2012-04-01

    Different candidates of quantum gravity proposal such as string theory, noncommutative geometry, loop quantum gravity and doubly special relativity, all predict the existence of a minimum observable length and/or a maximal momentum which modify the standard Heisenberg uncertainty principle. In this paper, we study the effects of minimal length and maximal momentum on the entropic force law formulated recently by E. Verlinde.

  10. Effect of Age and Other Factors on Maximal Heart Rate.

    ERIC Educational Resources Information Center

    Londeree, Ben R.; Moeschberger, Melvin L.

    1982-01-01

    To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after exercising on a bicycle or treadmill and after swimming were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)

  11. Maximal entanglement versus entropy for mixed quantum states

    SciTech Connect

    Wei, T.-C.; Goldbart, Paul M.; Kwiat, Paul G.; Nemoto, Kae; Munro, William J.; Verstraete, Frank

    2003-02-01

    Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy. Along the way, we determine the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.

  12. Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates.

    PubMed

    Skvortsova, Vasilisa; Palminteri, Stefano; Pessiglione, Mathias

    2014-11-19

    The mechanisms of reward maximization have been extensively studied at both the computational and neural levels. By contrast, little is known about how the brain learns to choose the options that minimize action cost. In principle, the brain could have evolved a general mechanism that applies the same learning rule to the different dimensions of choice options. To test this hypothesis, we scanned healthy human volunteers while they performed a probabilistic instrumental learning task that varied in both the physical effort and the monetary outcome associated with choice options. Behavioral data showed that the same computational rule, using prediction errors to update expectations, could account for both reward maximization and effort minimization. However, these learning-related variables were encoded in partially dissociable brain areas. In line with previous findings, the ventromedial prefrontal cortex was found to positively represent expected and actual rewards, regardless of effort. A separate network, encompassing the anterior insula, the dorsal anterior cingulate, and the posterior parietal cortex, correlated positively with expected and actual efforts. These findings suggest that the same computational rule is applied by distinct brain systems, depending on the choice dimension (cost or benefit) that has to be learned.
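
    The shared computational rule referred to above is a delta-rule update driven by prediction errors; the same update can track either expected reward or expected effort. A minimal sketch (the learning rate, outcome probability, and trial count are illustrative, not the study's fitted values):

```python
import random

def learn_expectation(p_outcome=0.8, alpha=0.1, n_trials=2000, seed=3):
    """Delta-rule (prediction-error) learning of an expected outcome:
    Q <- Q + alpha * (outcome - Q). Interpreting the outcome as reward
    gives reward learning; interpreting it as effort gives effort
    learning with the identical rule."""
    rng = random.Random(seed)
    q = 0.5
    history = []
    for _ in range(n_trials):
        outcome = 1.0 if rng.random() < p_outcome else 0.0
        q += alpha * (outcome - q)  # prediction error drives the update
        history.append(q)
    return sum(history[-500:]) / 500  # average late-trial expectation

# The learned expectation settles near the true outcome probability.
```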

  13. Multicultural Differences in Women's Expectations of Birth.

    PubMed

    Moore, Marianne F

    2016-01-01

    This review surveyed qualitative and quantitative studies to explore the expectations around birth held by women from different cultures. These studies are grouped according to expectations of personal control; expectations of support from partner/others/family; expectations of care/behavior from providers such as nurses, doctors, and/or midwives; expectations about the health of the baby; and expectations about pain in childbirth. The findings are discussed, and the roles of Western medical culture, power, and privilege in providing care to these women are noted. PMID:27263233

  14. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome

    PubMed Central

    Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz

    2015-01-01

    X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. 
Importantly, a limit to the maximal expression level explains biased tissue of expression

  15. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome.

    PubMed

  16. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome.

    PubMed

    Hurst, Laurence D; Ghanbarian, Avazeh T; Forrest, Alistair R R; Huminiecki, Lukasz

    2015-12-01

    X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression

  17. A taxonomic approach to communicating maxims in interstellar messages

    NASA Astrophysics Data System (ADS)

    Vakoch, Douglas A.

    2011-02-01

    Previous discussions of interstellar messages that could be sent to extraterrestrial intelligence have focused on descriptions of mathematics, science, and aspects of human culture and civilization. Although some of these depictions of humanity have implicitly referred to our aspirations, this has not clearly been separated from descriptions of our actions and attitudes as they are. In this paper, a methodology is developed for constructing interstellar messages that convey information about our aspirations by developing a taxonomy of maxims that provide guidance for living. Sixty-six maxims providing guidance for living were judged for degree of similarity to each other. Quantitative measures of the degree of similarity between all pairs of maxims were derived by aggregating similarity judgments across individual participants. These composite similarity ratings were subjected to a cluster analysis, which yielded a taxonomy that highlights perceived interrelationships between individual maxims and that identifies major classes of maxims. Such maxims can be encoded in interstellar messages through three-dimensional animation sequences conveying narratives that highlight interactions between individuals. In addition, verbal descriptions of these interactions in Basic English can be combined with these pictorial sequences to increase intelligibility. Online projects to collect messages, such as the SETI Institute's Earth Speaks and La Tierra Habla, can be used to solicit maxims from participants around the world.

  18. Oxygen uptake in maximal effort constant rate and interval running.

    PubMed

    Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley

    2013-01-01

    This study investigated differences in average VO2 between maximal effort interval running and maximal effort constant rate running at lactate threshold, matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 min at VO2max speed with 2 minutes at 50% of VO2 repeated 5 times, and a run at the average speed sustained during the interval run (CR sub-max). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different from CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than CR sub-max, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CRLT, 4431 (4202-3731) metres, compared to INT and CR sub-max, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses similar amounts of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the maximal effort constant-rate run. PMID:24288501

  19. Salience of Alcohol Expectancies and Drinking Outcomes.

    ERIC Educational Resources Information Center

    Reese, Finetta L.

    1997-01-01

    Investigated whether the prediction of drinking might be enhanced by considering salience of alcohol expectancies rather than mere endorsement. Hierarchical regression analyses demonstrated that expectancy salience significantly improved the prediction of total alcohol consumption above and beyond the effects of expectancy endorsement. Expectancy…

  20. International and American Students' Expectancies about Counseling.

    ERIC Educational Resources Information Center

    Yuen, Rhoda Ka-Wai; Tinsley, Howard E.A.

    1981-01-01

    American students expect the counselor to be less directive and protective and they themselves expect to be more responsible for improvement. In contrast, the Chinese, Iranian, and African students expect to assume a more passive role and that the counselor will be a more directive and nurturing authority figure. (Author)

  1. 7 CFR 760.636 - Expected revenue.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR, SPECIAL PROGRAMS, INDEMNITY PAYMENT PROGRAMS, Supplemental Revenue Assistance Payments Program; § 760.636 Expected revenue. The expected revenue for each crop on a farm is: (a) For each insurable crop,...

  2. Brain mechanisms supporting violated expectations of pain.

    PubMed

    Zeidan, Fadel; Lobanov, Oleg V; Kraft, Robert A; Coghill, Robert C

    2015-09-01

    The subjective experience of pain is influenced by interactions between experiences, future predictions, and incoming afferent information. Expectations of high pain can exacerbate pain, whereas expectations of low pain during a consistently noxious stimulus can produce significant reductions in pain. However, the brain mechanisms associated with processing mismatches between expected and experienced pain are poorly understood, even though they are important for imparting salience to a sensory event and overriding erroneous top-down expectancy-mediated information. This investigation examined pain-related brain activation when expectations of pain were abruptly violated. After conditioning participants to cues predicting low or high pain, 10 incorrectly cued stimuli were administered across 56 stimulus trials to determine whether expectations would be less influential on pain when there is a high discordance between prestimulus cues and corresponding thermal stimulation. Incorrectly cued stimuli produced pain ratings and pain-related brain activation consistent with placebo analgesia, nocebo hyperalgesia, and violated expectations. Violated expectations of pain were associated with activation in distinct regions of the inferior parietal lobe, including the supramarginal and angular gyrus, and intraparietal sulcus, the superior parietal lobe, cerebellum, and occipital lobe. Thus, violated expectations of pain engage mechanisms supporting salience-driven sensory discrimination, working memory, and associative learning processes. By overriding the influence of expectations on pain, these brain mechanisms are likely engaged in clinical situations in which patients' unrealistic expectations of pain relief diminish the efficacy of pain treatments. Accordingly, these findings underscore the importance of maintaining realistic expectations to augment the effectiveness of pain management.

  3. Building hospital TQM teams: effective polarity analysis and maximization.

    PubMed

    Hurst, J B

    1996-09-01

    Building and maintaining teams require careful attention to and maximization of such polar opposites ("polarities") as individual and team, directive and participatory leadership, task and process, and stability and change. Analyzing systematic elements of any polarity and listing blocks, supports, and flexible ways to maximize it will prevent the negative consequences that occur when treating a polarity like a solvable problem. Flexible, well-timed shifts from pole to pole result in the maximization of upside and minimization of downside consequences.

  4. Siting Samplers to Minimize Expected Time to Detection

    SciTech Connect

    Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.

    2012-05-02

    We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti(1) developed a proof of concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal detect to treat architectures, which maximize the probability of detecting a release. This paper develops a more general approach, and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal detect to warn architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.
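The detect-to-warn objective described above can be sketched numerically: given a prior over release scenarios and, for each candidate sampler location, the time a sampler there would take to register each scenario, the network minimizing expected time to detection can be found by searching over location subsets. The locations, detection times, and scenario probabilities below are hypothetical illustrations, not values from the study.

```python
import itertools

# Hypothetical detection-time table: detect_time[s][r] = minutes for a
# sampler at location s to detect release scenario r.
detect_time = {
    "lobby":  [5, 30, 60],
    "atrium": [20, 4, 45],
    "hvac":   [15, 25, 6],
}
scenario_prob = [0.5, 0.3, 0.2]  # prior probability of each release scenario

def expected_detection_time(samplers):
    """Expected time to first detection: for each scenario the fastest
    sampler in the network detects; weight by scenario probability."""
    return sum(p * min(detect_time[s][r] for s in samplers)
               for r, p in enumerate(scenario_prob))

def best_network(k):
    """Exhaustively pick the k-sampler network minimizing expected time."""
    return min(itertools.combinations(detect_time, k),
               key=expected_detection_time)

print(best_network(2), expected_detection_time(best_network(2)))
```

For realistic buildings the exhaustive search would be replaced by a greedy or stochastic optimizer, but the objective function stays the same.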

  5. Expectancy effects in memory for melodies.

    PubMed

    Schmuckler, M A

    1997-12-01

    Two experiments explored the relation between melodic expectancy and melodic memory. In Experiment 1, listeners rated the degree to which different endings confirmed their expectations for a set of melodies. After providing these expectancy ratings, listeners received a recognition memory test in which they discriminated previously heard melodies from new melodies. Recognition memory in this task positively correlated with perceived expectancy, and was related to the estimated tonal coherence of these melodies. Experiment 2 extended these results, demonstrating better recognition memory for high expectancy melodies, relative to medium and low expectancy melodies. This experiment also observed asymmetrical memory confusions as a function of perceived expectancy. These findings fit with a model of musical memory in which schematically central events are better remembered than schematically peripheral events. PMID:9606947

  6. Prior expectations facilitate metacognition for perceptual decision.

    PubMed

    Sherman, M T; Seth, A K; Barrett, A B; Kanai, R

    2015-09-01

    The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition.
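A minimal sketch of how a prior expectation enters a Bayesian signal detection account of such a yes/no task (assuming the standard equal-variance Gaussian model, not necessarily the paper's exact formulation): the optimal observer reports "yes" when the evidence exceeds a criterion shifted by the log prior odds of target presence.

```python
import math

def optimal_criterion(d_prime, p_present):
    """Equal-variance Gaussian SDT: evidence is N(0,1) when the target is
    absent and N(d',1) when present. Reporting 'yes' iff x exceeds this
    criterion maximizes accuracy; the prior shifts it by ln(odds)/d'."""
    return d_prime / 2 - math.log(p_present / (1 - p_present)) / d_prime

# With targets expected on 75% of trials, the criterion is more liberal
# (lower) than under a neutral 50% prior:
c_neutral = optimal_criterion(1.5, 0.5)
c_expect = optimal_criterion(1.5, 0.75)
print(c_neutral, c_expect)
```

The shift term is what an "expectation of target presence" contributes; metacognitive (confidence) judgments can then be modeled on the distance of the evidence from this criterion.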

  7. The maximization of overall reinforcement rate on concurrent chains.

    PubMed

    Houston, A I; Sumida, B H; McNamara, J M

    1987-07-01

    We model behavioral allocation on concurrent chains in which the initial links are independent variable-interval schedules. We also quantify the relationship between behavior during the initial links and the probability of entering a terminal link. The behavior that maximizes overall reinforcement rate is then considered and compared with published experimental data. Although all the trends in the data are predicted by rate maximization, there are considerable deviations from the predictions of rate maximization when reward magnitudes are unequal. We argue from our results that optimal allocation on concurrent chains, and prey choice as used in the theory of optimal diets, are distinct concepts. We show that the maximization of overall rate can lead to apparent violations of stochastic transitivity.

  8. Rational maximizing by humans (Homo sapiens) in an ultimatum game.

    PubMed

    Smith, Phillip; Silberberg, Alan

    2010-07-01

    In the human mini-ultimatum game, a proposer splits a sum of money with a responder. If the responder accepts, both are paid. If not, neither is paid. Typically, responders reject inequitable distributions, favoring punishing over maximizing. In Jensen et al.'s (Science 318:107-109, 2007) adaptation with apes, a proposer selects between two distributions of raisins. Despite inequitable offers, responders often accept, thereby maximizing. The rejection response differs between the human and ape versions of this game. For humans, rejection is instantaneous; for apes, it requires 1 min of inaction. We replicate Jensen et al.'s procedure in humans with money. When waiting 1 min to reject, humans favor punishing over maximizing; however, when rejection requires 5 min of inaction, humans, like apes, maximize. If species differences in time horizons are accommodated, Jensen et al.'s ape data are reproducible in humans.

  9. Carnot cycle at finite power: attainability of maximal efficiency.

    PubMed

    Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G

    2013-08-01

    We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows us to construct engines that extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
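For reference, the two textbook benchmarks implicit in this discussion (standard results, not taken from the abstract) are the Carnot efficiency, attained only in the quasi-static limit of vanishing power, and the Curzon-Ahlborn efficiency, the classical estimate of efficiency at maximum power for an endoreversible engine between reservoirs at temperatures T_h > T_c:

```latex
% Carnot bound (quasi-static limit, zero power):
\eta_{\mathrm{C}} = 1 - \frac{T_c}{T_h}

% Curzon--Ahlborn efficiency at maximum power (endoreversible model):
\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_c}{T_h}}
```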

  10. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    SciTech Connect

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-10-15

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D{>=}5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  11. Maximizing Your Investment in Building Automation System Technology.

    ERIC Educational Resources Information Center

    Darnell, Charles

    2001-01-01

    Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)

  12. Maximized PUFA measurements improve insight in changes in fatty acid composition in response to temperature.

    PubMed

    van Dooremalen, Coby; Pel, Roel; Ellers, Jacintha

    2009-10-01

    A general mechanism underlying the response of ectotherms to environmental changes often involves changes in fatty acid composition. Theory predicts that a decrease in temperature causes an increase in unsaturation of fatty acids, with an important role for long-chain poly-unsaturated fatty acids (PUFAs). However, PUFAs are particularly unstable and susceptible to peroxidation, hence subtle differences in fatty acid composition can be challenging to detect. We determined the fatty acid composition in springtail (Collembola) in response to two temperatures (5 degrees C and 25 degrees C). First, we tested different sample preparation methods to maximize PUFAs. Treatments consisted of different solvents for primary lipid extraction, mixing with antioxidant, flushing with inert gas, and using different temperature exposures during saponification. Especially slow saponification at low temperature (90 min at 70 degrees C) in combination with replacement of headspace air with nitrogen during saponification and methylation maximized PUFAs for GC analysis. Applying these methods to measure thermal responses in fatty acid composition, the data showed that the (maximized) proportion of C(20) PUFAs increased at low acclimation temperature. However, C(18) PUFAs increased at high acclimation temperature, which is contrary to expectations. Our study illustrates that PUFA levels in lipids may often be underestimated and this may hamper a correct interpretation of differential responses of fatty acid composition. PMID:19557745

  13. Maximized PUFA measurements improve insight in changes in fatty acid composition in response to temperature.

    PubMed

    van Dooremalen, Coby; Pel, Roel; Ellers, Jacintha

    2009-10-01

    A general mechanism underlying the response of ectotherms to environmental changes often involves changes in fatty acid composition. Theory predicts that a decrease in temperature causes an increase in unsaturation of fatty acids, with an important role for long-chain poly-unsaturated fatty acids (PUFAs). However, PUFAs are particularly unstable and susceptible to peroxidation, hence subtle differences in fatty acid composition can be challenging to detect. We determined the fatty acid composition in springtail (Collembola) in response to two temperatures (5 degrees C and 25 degrees C). First, we tested different sample preparation methods to maximize PUFAs. Treatments consisted of different solvents for primary lipid extraction, mixing with antioxidant, flushing with inert gas, and using different temperature exposures during saponification. Especially slow saponification at low temperature (90 min at 70 degrees C) in combination with replacement of headspace air with nitrogen during saponification and methylation maximized PUFAs for GC analysis. Applying these methods to measure thermal responses in fatty acid composition, the data showed that the (maximized) proportion of C(20) PUFAs increased at low acclimation temperature. However, C(18) PUFAs increased at high acclimation temperature, which is contrary to expectations. Our study illustrates that PUFA levels in lipids may often be underestimated and this may hamper a correct interpretation of differential responses of fatty acid composition.

  14. STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS*

    PubMed Central

    HUDOMIET, PÉTER; KÉZDI, GÁBOR; WILLIS, ROBERT J.

    2011-01-01

    This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households’ expectations about the returns on the stock market index. We use data from the Health and Retirement Study that was fielded in February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies substantial long-term increase in disagreement. The increase in disagreement is larger among the stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market. PMID:21547244

  15. Violated expectancies: Cause and function of exploration, fear, and aggression.

    PubMed

    van Kampen, Hendrik S

    2015-08-01

    To be able to reproduce, animals need to survive and interact with an ever-changing environment. Therefore, they create a cognitive representation of that environment, from which they derive expectancies regarding current and future events. These expected events are compared continuously with information gathered through exploration, to guide behaviour and update the existing representation. When a moderate discrepancy between perceived and expected events is detected, exploration is employed to update the internal representation so as to alter the expectancy and make it match the perceived event. When the discrepancy is relatively large, exploration is inhibited, and animals will try to alter the perceived event utilizing aggression or fear. The largest discrepancies are associated with a tendency to flee. When an exploratory, fear, or aggressive behaviour pattern proves to be the optimal solution for a particular discrepancy, the response will become conditioned to events that previously preceded the occurrence of that discrepancy. When primary needs are relatively low, animals will actively look for or create moderately violated expectancies in order to learn about objects, behaviour patterns, and the environment. In those situations, exploratory tendencies will summate with ongoing behaviour and, when all primary needs are satiated, may even be performed exclusively. This results in behavioural variability, play, and active information-seeking. This article is part of a Special Issue entitled: In Honor of Jerry Hogan.

  16. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  17. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs

    PubMed Central

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-01-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph. PMID:25767331

  18. A new augmentation based algorithm for extracting maximal chordal subgraphs

    SciTech Connect

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
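The augmentation idea behind these records can be sketched in a few lines, with two simplifications relative to the paper: chordality is tested naively by simplicial elimination (a graph is chordal iff one can repeatedly delete a vertex whose neighbours form a clique), and the augmentation starts from the empty subgraph rather than a spanning chordal subgraph. This is a serial illustration of the concept, not the authors' parallel algorithm.

```python
from itertools import combinations

def is_chordal(n, edges):
    """Simplicial-elimination chordality test on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    while remaining:
        # A vertex is simplicial if its remaining neighbours form a clique.
        simplicial = next(
            (v for v in remaining
             if all(b in adj[a]
                    for a, b in combinations(adj[v] & remaining, 2))),
            None)
        if simplicial is None:
            return False  # some induced subgraph has no simplicial vertex
        remaining.discard(simplicial)
    return True

def maximal_chordal_subgraph(n, edges):
    """Keep re-scanning rejected edges until no further edge can be
    added without breaking chordality; the result is maximal."""
    kept, pool = [], list(edges)
    changed = True
    while changed:
        changed, rest = False, []
        for e in pool:
            if is_chordal(n, kept + [e]):
                kept.append(e)
                changed = True
            else:
                rest.append(e)
        pool = rest
    return kept

# 4-cycle plus one diagonal: the diagonal chords the cycle, so the
# whole graph is chordal and every edge survives.
print(maximal_chordal_subgraph(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```

The repeated rescans matter: an edge rejected early can become admissible after later additions supply the chord it was missing.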

  19. Evolution of Shanghai STOCK Market Based on Maximal Spanning Trees

    NASA Astrophysics Data System (ADS)

    Yang, Chunxia; Shen, Ying; Xia, Bingying

    2013-01-01

    In this paper, using a moving window to scan through every stock price time series over a period from 2 January 2001 to 11 March 2011 and mutual information to measure the statistical interdependence between stock prices, we construct a corresponding weighted network for 501 Shanghai stocks in every given window. Next, we extract its maximal spanning tree and understand the structure variation of Shanghai stock market by analyzing the average path length, the influence of the center node and the p-value for every maximal spanning tree. A further analysis of the structure properties of maximal spanning trees over different periods of Shanghai stock market is carried out. All the obtained results indicate that the periods around 8 August 2005, 17 October 2007 and 25 December 2008 are turning points of Shanghai stock market; at turning points, the topology structure of the maximal spanning tree changes markedly: the degree of separation between nodes increases; the structure becomes looser; the influence of the center node gets smaller, and the degree distribution of the maximal spanning tree is no longer a power-law distribution. Lastly, we analyze the variations of the single-step and multi-step survival ratios for all maximal spanning trees and find that closely bonded pairs of stocks are hard to break apart in the short term; over the long term, by contrast, no pair of stocks remains closely bonded.
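The maximal spanning tree extraction step can be illustrated with Kruskal's algorithm run on descending edge weights: greedily keep the strongest links that do not close a cycle, using union-find for the cycle test. The five "stocks" and mutual-information weights below are made-up toy values, not Shanghai market data.

```python
# Toy mutual-information weights between five hypothetical stocks.
edges = [
    ("A", "B", 0.9), ("A", "C", 0.4), ("B", "C", 0.7),
    ("B", "D", 0.3), ("C", "D", 0.8), ("D", "E", 0.6), ("C", "E", 0.2),
]

def maximal_spanning_tree(nodes, edges):
    """Kruskal's algorithm on weights sorted in descending order."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:           # keep edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

mst = maximal_spanning_tree("ABCDE", edges)
print(sorted(mst))
```

A maximal spanning tree is just a minimum spanning tree on negated weights, so any MST library routine can be reused with a sign flip.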

  20. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    PubMed Central

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, for computing congruence indices, and for identifying horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  1. Combustion Research Aboard the ISS Utilizing the Combustion Integrated Rack and Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Sutliff, Thomas J.; Otero, Angel M.; Urban, David L.

    2002-01-01

    The Physical Sciences Research Program of NASA sponsors a broad suite of peer-reviewed research investigating fundamental combustion phenomena and applied combustion research topics. This research is performed through both ground-based and on-orbit research capabilities. The International Space Station (ISS) and two facilities, the Combustion Integrated Rack and the Microgravity Science Glovebox, are key elements in the execution of microgravity combustion flight research planned for the foreseeable future. This paper reviews the Microgravity Combustion Science research planned for the International Space Station implemented from 2003 through 2012. Examples of selected research topics, expected outcomes, and potential benefits will be provided. This paper also summarizes a multi-user hardware development approach, recapping the progress made in preparing these research hardware systems. Within the description of this approach, an operational strategy is presented that illustrates how utilization of constrained ISS resources may be maximized dynamically to increase science through design decisions made during hardware development.

  2. The Smoking Consequences Questionnaire-Adult: Measurement of Smoking Outcome Expectancies of Experienced Smokers.

    ERIC Educational Resources Information Center

    Copeland, Amy L.; And Others

    1995-01-01

    Two versions of the Smoking Consequences Questionnaire for adults were developed and tested with 407 smokers and nonsmokers. The version with probability items appeared to have greater construct validity than the version with subjective expected utility items. The scale reflects the refinement of smokers' outcome expectancies with experience. (SLD)

  3. Rapid Expectation Adaptation during Syntactic Comprehension

    PubMed Central

    Fine, Alex B.; Jaeger, T. Florian; Farmer, Thomas A.; Qian, Ting

    2013-01-01

    When we read or listen to language, we are faced with the challenge of inferring intended messages from noisy input. This challenge is exacerbated by considerable variability between and within speakers. Focusing on syntactic processing (parsing), we test the hypothesis that language comprehenders rapidly adapt to the syntactic statistics of novel linguistic environments (e.g., speakers or genres). Two self-paced reading experiments investigate changes in readers’ syntactic expectations based on repeated exposure to sentences with temporary syntactic ambiguities (so-called “garden path sentences”). These sentences typically lead to a clear expectation violation signature when the temporary ambiguity is resolved to an a priori less expected structure (e.g., based on the statistics of the lexical context). We find that comprehenders rapidly adapt their syntactic expectations to converge towards the local statistics of novel environments. Specifically, repeated exposure to a priori unexpected structures can reduce, and even completely undo, their processing disadvantage (Experiment 1). The opposite is also observed: a priori expected structures become less expected (even eliciting garden paths) in environments where they are hardly ever observed (Experiment 2). Our findings suggest that, when changes in syntactic statistics are to be expected (e.g., when entering a novel environment), comprehenders can rapidly adapt their expectations, thereby overcoming the processing disadvantage that mistaken expectations would otherwise cause. Our findings take a step towards unifying insights from research in expectation-based models of language processing, syntactic priming, and statistical learning. PMID:24204909

  4. Premenstrual symptoms and smoking-related expectancies.

    PubMed

    Pang, Raina D; Bello, Mariel S; Stone, Matthew D; Kirkpatrick, Matthew G; Huh, Jimi; Monterosso, John; Haselton, Martie G; Fales, Melissa R; Leventhal, Adam M

    2016-06-01

    Given that prior research implicates smoking abstinence in increased premenstrual symptoms, tobacco withdrawal, and smoking behaviors, it is possible that women with more severe premenstrual symptoms have stronger expectancies about the effects of smoking and abstaining from smoking on mood and withdrawal. However, such relations have not been previously explored. This study examined relations between premenstrual symptoms experienced in the last month and expectancies that abstaining from smoking results in withdrawal (i.e., smoking abstinence withdrawal expectancies), that smoking is pleasurable (i.e., positive reinforcement smoking expectancies), and smoking relieves negative mood (i.e., negative reinforcement smoking expectancies). In a cross-sectional design, 97 non-treatment seeking women daily smokers completed self-report measures of smoking reinforcement expectancies, smoking abstinence withdrawal expectancies, premenstrual symptoms, mood symptoms, and nicotine dependence. Affect premenstrual symptoms were associated with increased negative reinforcement smoking expectancies, but not over and above covariates. Affect and pain premenstrual symptoms were associated with increased positive reinforcement smoking expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Affect, pain, and water retention premenstrual symptoms were associated with increased smoking abstinence withdrawal expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Findings from this study suggest that addressing concerns about withdrawal and alternatives to smoking may be particularly important in women who experience more severe premenstrual symptoms, especially affect-related changes. PMID:26869196

  6. Integrated life sciences technology utilization development program

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The goal of the TU program was to maximize the development of operable hardware and systems which will be of substantial benefit to the public. Five working prototypes were developed, and a meal system for the elderly is now undergoing evaluation. Manpower utilization is shown relative to the volume of requests in work for each month. The ASTP mobile laboratories and post Skylab bedrest study are also described.

  7. Design and manufacturing rules for maximizing the performance of polycrystalline piezoelectric bending actuators

    NASA Astrophysics Data System (ADS)

    Jafferis, Noah T.; Smith, Michael J.; Wood, Robert J.

    2015-06-01

    Increasing the energy and power density of piezoelectric actuators is very important for any weight-sensitive application, and is especially crucial for enabling autonomy in micro/milli-scale robots and devices utilizing this technology. This is achieved by maximizing the mechanical flexural strength and electrical dielectric strength through the use of laser-induced melting or polishing, insulating edge coating, and crack-arresting features, combined with features for rigid ground attachments to maximize force output. Manufacturing techniques have also been developed to enable mass customization, in which sheets of material are pre-stacked to form a laminate from which nearly arbitrary planar actuator designs can be fabricated using only laser cutting. These techniques have led to a 70% increase in energy density and an increase in mean lifetime of at least 15× compared to prior manufacturing methods. In addition, measurements have revealed a doubling of the piezoelectric coefficient when operating at the high fields necessary to achieve maximal energy densities, along with an increase in the Young’s modulus at the high compressive strains encountered—these two effects help to explain the higher performance of our actuators as compared to that predicted by linear models.

  8. Grief experiences and expectance of suicide.

    PubMed

    Wojtkowiak, Joanna; Wild, Verena; Egger, Jos

    2012-02-01

    Suicide is generally viewed as an unexpected cause of death. However, some suicides might be expected to a certain extent, which needs to be further studied. The relationships between expecting suicide, feeling understanding for the suicide, and later grief experiences were explored. In total, 142 bereaved participants completed the Grief Experience Questionnaire and additional measurements on expectance and understanding. Results supported the prediction of a link between expecting suicide and understanding the suicide. Higher expectance and understanding were related to less searching for explanation and preoccupation with the suicide. There was no direct association with other grief experiences. We conclude that more attention should be brought to the relation between expecting the suicide of a loved one and later grief responses in research and in clinical practice.

  9. Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae) as an adaptogen: a closer look.

    PubMed

    Davydov, M; Krikorian, A D

    2000-10-01

    The adaptogen concept is examined from an historical, biological, chemical, pharmacological and medical perspective using a wide variety of primary and secondary literature. The definition of an adaptogen first proposed by Soviet scientists in the late 1950s, namely that an adaptogen is any substance that exerts effects on both sick and healthy individuals by 'correcting' any dysfunction(s) without producing unwanted side effects, was used as a point of departure. We attempted to identify critically what an adaptogen supposedly does and to determine whether the word embodies in and of itself any concept(s) acceptable to western conventional (allopathic) medicine. Special attention was paid to the reported pharmacological effects of the 'adaptogen-containing plant' Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae), referred to by some as 'Siberian ginseng', and to its secondary chemical composition. We conclude that so far as specific pharmacological activities are concerned there are a number of valid arguments for equating the action of so-called adaptogens with those of medicinal agents that have activities as anti-oxidants, and/or anti-cancerogenic, immunomodulatory and hypocholesteroletic as well as hypoglycemic and choleretic action. However, 'adaptogens' and 'anti-oxidants' etc. also show significant dissimilarities and these are discussed. Significantly, the classical definition of an adaptogen has much in common with views currently being invoked to describe and explain the 'placebo effect'. Nevertheless, the chemistry of the secondary compounds of Eleutherococcus isolated thus far and their pharmacological effects support our hypothesis that the reported beneficial effects of adaptogens derive from their capacity to exert protective and/or inhibitory action against free radicals. An inventory of the secondary substances contained in Eleutherococcus discloses a potential for a wide range of activities reported from work on cultured cell lines.

  10. How fast-growing bacteria robustly tune their ribosome concentration to approximate growth-rate maximization.

    PubMed

    Bosdriesz, Evert; Molenaar, Douwe; Teusink, Bas; Bruggeman, Frank J

    2015-05-01

    Maximization of growth rate is an important fitness strategy for bacteria. Bacteria can achieve this by expressing proteins at optimal concentrations, such that resources are not wasted. This is exemplified for Escherichia coli by the increase of its ribosomal protein-fraction with growth rate, which precisely matches the increased protein synthesis demand. These findings and others have led to the hypothesis that E. coli aims to maximize its growth rate in environments that support growth. However, what kind of regulatory strategy is required for a robust, optimal adjustment of the ribosome concentration to the prevailing condition is still an open question. In the present study, we analyze the ppGpp-controlled mechanism of ribosome expression used by E. coli and show that this mechanism keeps the ribosomes saturated with their substrates. In this manner, overexpression of the highly abundant ribosomal proteins is prevented, and limited resources can be redirected to the synthesis of other growth-promoting enzymes. It turns out that the kinetic conditions for robust, optimal protein-partitioning, which are required for growth rate maximization across conditions, can be achieved with basic biochemical interactions. We show that inactive ribosomes are the most suitable 'signal' for tracking the intracellular nutritional state and for adjusting gene expression accordingly, as small deviations from optimal ribosome concentration cause a huge fractional change in ribosome inactivity. We expect to find this control logic implemented across fast-growing microbial species because growth rate maximization is a common selective pressure, ribosomes are typically highly abundant and thus costly, and the required control can be implemented by a small, simple network.

  11. Brain Mechanisms Supporting Violated Expectations of Pain

    PubMed Central

    Zeidan, Fadel; Lobanov, Oleg V.; Kraft, Robert A.; Coghill, Robert C.

    2015-01-01

    The subjective experience of pain is influenced by interactions between prior experiences, future predictions and incoming afferent information. Expectations of high pain can exacerbate pain while expectations of low pain during a consistently noxious stimulus can produce significant reductions in pain. However, the brain mechanisms associated with processing mismatches between expected and experienced pain are poorly understood, but are important for imparting salience to a sensory event in order to override erroneous top-down expectancy-mediated information. The present investigation examined pain-related brain activation when expectations of pain were abruptly violated. After conditioning participants to cues predicting low or high pain, ten incorrectly cued stimuli were administered across 56 stimulus trials to determine if expectations would be less influential on pain when there is a high discordance between pre-stimulus cues and corresponding thermal stimulation. Incorrectly cued stimuli produced pain ratings and pain-related brain activation consistent with placebo analgesia, nocebo hyperalgesia, and violated expectations. Violated expectations of pain were associated with activation in distinct regions of the inferior parietal lobe, including the supramarginal and angular gyrus, and intraparietal sulcus, the superior parietal lobe, cerebellum and occipital lobe. Thus, violated expectations of pain engage mechanisms supporting salience-driven sensory discrimination, working memory, and associative learning processes. By overriding the influence of expectations on pain, these brain mechanisms are likely engaged in clinical situations where patients’ unrealistic expectations for pain relief diminish the efficacy of pain treatments. Accordingly, these findings underscore the importance of maintaining realistic expectations to augment the effectiveness of pain management. PMID:26083664

  12. Expectant fathers at risk for couvade.

    PubMed

    Clinton, J F

    1986-01-01

    A repeated measures survey design was used to monitor the physical and emotional health of 81 expectant fathers at lunar month intervals throughout their partners' pregnancy and the early postpartum period. The data set consisted of 515 repeated measures. The backward elimination regression procedure was used to identify six factors that partially explained health events experienced by expectant fathers: affective involvement in pregnancy, number of previous children, income, ethnic identity, perceived stress, and recent health prior to expectant fatherhood.

  13. Increasing hope by addressing clients' outcome expectations.

    PubMed

    Swift, Joshua K; Derthick, Annie O

    2013-09-01

    Addressing clients' outcome expectations is an important clinical process that can lead to a strong therapeutic alliance, more positive treatment outcomes, and decreased rates of premature termination from psychotherapy. Five interventions designed to foster appropriate outcome expectations are discussed, including presenting a convincing treatment rationale, increasing clients' faith in their therapists, expressing faith in clients, providing outcome education, and comparing progress with expectations. Clinical examples and research support are provided for each. PMID:24000836

  14. Documenting and explaining the common AAB pattern in music and humor: establishing and breaking expectations.

    PubMed

    Rozin, Paul; Rozin, Alexander; Appel, Brian; Wachtel, Charles

    2006-08-01

    The AAB pattern consists of two similar events followed by a third dissimilar event. The prevalence of this pattern in the aesthetic domain may be explained as violation of expectation: A minimum of two iterations is required to establish a repetitive pattern; once established, it is most efficient to promptly violate the expected continuance of the pattern to produce the maximal aesthetic effect. We demonstrate the prevalence of this pattern (in comparison to AB or AAAB) in a representative sample of a variety of musical genres and in a representative sample of repetitive genre of jokes. We also provide experimental evidence that the AAB pattern in jokes is maximally effective in producing a humor response in participants.

  15. [Utilities: a solution of a decision problem?].

    PubMed

    Koller, Michael; Ohmann, Christian; Lorenz, Wilfried

    2008-01-01

    Utility is a concept that originates from utilitarianism, a highly influential philosophical school in the Anglo-American world. The cornerstone of utilitarianism is the principle of maximum happiness or utility. In the medical sciences, this utility approach has been adopted and developed within the field of medical decision making. On an operational level, utility is the evaluation of a health state or an outcome on a one-dimensional scale ranging from 0 (death) to 1 (perfect health). By adding the concept of expectancy, the graphic representation of both concepts in a decision tree results in the specification of expected utilities and helps to resolve complex medical decision problems. Criticism of the utility approach relates to the rational perspective on humans (which is rejected by a considerable fraction of research in psychology) and to the artificial methods used in the evaluation of utility, such as Standard Gamble or Time Trade Off. These may well be the reason why the utility approach has never been accepted in Germany. Nevertheless, innovative concepts for defining goals in health care are urgently required, as the current debate in Germany on "Nutzen" (interestingly translated as 'benefit' instead of as 'utility') and integrated outcome models indicates. It remains to be seen whether this discussion will lead to a re-evaluation of the utility approach.
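    The expected-utility machinery described here is easy to make concrete. In the sketch below, the two treatment options, their outcome probabilities, and the 0-to-1 utilities are hypothetical numbers chosen purely to illustrate the decision-tree arithmetic:

```python
def expected_utility(branch):
    """branch: list of (probability, utility) pairs for one decision
    option; utilities lie on the conventional scale from 0 (death)
    to 1 (perfect health)."""
    assert abs(sum(p for p, _ in branch) - 1.0) < 1e-9
    return sum(p * u for p, u in branch)

# Hypothetical decision: operate or treat conservatively.
surgery = [(0.02, 0.0),   # perioperative death
           (0.18, 0.6),   # complication, impaired health state
           (0.80, 0.95)]  # successful outcome
conservative = [(0.60, 0.6),   # chronic impairment persists
                (0.40, 0.95)]  # spontaneous improvement

options = {"surgery": surgery, "conservative": conservative}
best = max(options, key=lambda name: expected_utility(options[name]))
```

    With these made-up numbers surgery yields the higher expected utility (0.868 vs. 0.74); in practice the utilities themselves would come from elicitation methods such as Standard Gamble or Time Trade Off.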

  16. The Role of Labour Market Expectations and Admission Probabilities in Students' Application Decisions on Higher Education: The Case of Hungary

    ERIC Educational Resources Information Center

    Varga, Julia

    2006-01-01

    This paper analyses students' application strategies to higher education, the effects of labour market expectations and admission probabilities. The starting hypothesis of this study is that students consider the expected utility of their choices, a function of expected net lifetime earnings and the probability of admission. Based on a survey…

  17. Predicting problem behaviors with multiple expectancies: expanding expectancy-value theory.

    PubMed

    Borders, Ashley; Earleywine, Mitchell; Huey, Stanley J

    2004-01-01

    Expectancy-value theory emphasizes the importance of outcome expectancies for behavioral decisions, but most tests of the theory focus on a single behavior and a single expectancy. However, the matching law suggests that individuals consider expected outcomes for both the target behavior and alternative behaviors when making decisions. In this study, we expanded expectancy-value theory to evaluate the contributions of two competing expectancies to adolescent behavior problems. One hundred twenty-one high school students completed measures of behavior problems, expectancies for both acting out and academic effort, and perceived academic competence. Students' self-reported behavior problems covaried mostly with perceived competence and academic expectancies and only nominally with problem behavior expectancies. We suggest that behavior problems may result from students perceiving a lack of valued or feasible alternative behaviors, such as studying. We discuss implications for interventions and suggest that future research continue to investigate the contribution of alternative expectancies to behavioral decisions.

  18. Disk Density Tuning of a Maximal Random Packing

    PubMed Central

    Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.

    2016-01-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
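    A maximal random packing of the kind this framework starts from can be approximated with naive dart throwing. The sketch below is a simplified illustration, not the authors' tuning framework: it uses a fixed disk radius in the unit square, with termination after a long run of rejections standing in (approximately) for true coverage maximality:

```python
import random

def dart_throwing_packing(r, max_misses=2000, seed=0):
    """Accept a uniform random candidate iff its centre lies at least r
    from every accepted centre; stop after max_misses consecutive
    rejections (an approximation of coverage maximality)."""
    rng = random.Random(seed)
    pts, misses = [], 0
    while misses < max_misses:
        c = (rng.random(), rng.random())
        # compare squared distances to avoid the square root
        if all((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 >= r * r
               for p in pts):
            pts.append(c)
            misses = 0
        else:
            misses += 1
    return pts
```

    Density tuning of the kind the paper describes would then relocate, inject, or eject disks one at a time while preserving the minimum-separation and coverage invariants that this construction establishes.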

  19. Molecular maximizing characterizes choice on Vaughan's (1981) procedure.

    PubMed

    Silberberg, A; Ziriax, J M

    1985-01-01

    Pigeons keypecked on a two-key procedure in which their choice ratios during one time period determined the reinforcement rates assigned to each key during the next period (Vaughan, 1981). During each of four phases, which differed in the reinforcement rates they provided for different choice ratios, the duration of these periods was four minutes, duplicating one condition from Vaughan's study. During the other four phases, these periods lasted six seconds. When these periods were long, the results were similar to Vaughan's and appeared compatible with melioration theory. But when these periods were short, the data were consistent with molecular maximizing (see Silberberg & Ziriax, 1982) and were incompatible with melioration, molar maximizing, and matching. In a simulation, stat birds following a molecular-maximizing algorithm responded on the short- and long-period conditions of this experiment. When the time periods lasted four minutes, the results were similar to Vaughan's and to the results of the four-minute conditions of this study; when the time periods lasted six seconds, the choice data were similar to the data from real subjects for the six-second conditions. Thus, a molecular-maximizing response rule generated choice data comparable to those from the short- and long-period conditions of this experiment. These data show that, among extant accounts, choice on the Vaughan procedure is most compatible with molecular maximizing.

  20. Ventilatory patterns differ between maximal running and cycling.

    PubMed

    Tanner, David A; Duke, Joseph W; Stager, Joel M

    2014-01-15

    To determine the effect of exercise mode on ventilatory patterns, 22 trained men performed two maximal graded exercise tests; one running on a treadmill and one cycling on an ergometer. Tidal flow-volume (FV) loops were recorded during each minute of exercise with maximal loops measured pre and post exercise. Running resulted in a greater VO2peak than cycling (62.7 ± 7.6 vs. 58.1 ± 7.2 mL·kg⁻¹·min⁻¹). Although maximal ventilation (VE) did not differ between modes, ventilatory equivalents for O2 and CO2 were significantly larger during maximal cycling. Arterial oxygen saturation (estimated via ear oximeter) was also greater during maximal cycling, as were end-expiratory (EELV; 3.40 ± 0.54 vs. 3.21 ± 0.55 L) and end-inspiratory lung volumes (EILV; 6.24 ± 0.88 vs. 5.90 ± 0.74 L). Based on these results we conclude that ventilatory patterns differ as a function of exercise mode and these observed differences are likely due to the differences in posture adopted during exercise in these modes.

  1. Do obese children perceive submaximal and maximal exertion differently?

    PubMed

    Belanger, Kevin; Breithaupt, Peter; Ferraro, Zachary M; Barrowman, Nick; Rutherford, Jane; Hadjiyannakis, Stasia; Colley, Rachel C; Adamo, Kristi B

    2013-01-01

    We examined how obese children perceive a maximal cardiorespiratory fitness test compared with a submaximal cardiorespiratory fitness test. Twenty-one obese children (body mass index ≥95th percentile, ages 10-17 years) completed maximal and submaximal cardiorespiratory fitness tests on 2 separate occasions. Oxygen consumption (VO2) and overall perceived exertion (Borg 15-category scale) were measured in both fitness tests. At comparable workloads, perceived exertion was rated significantly higher (P < 0.001) in the submaximal cardiorespiratory fitness test compared with the maximal cardiorespiratory fitness test. The submaximal cardiorespiratory fitness test was significantly longer than the maximal test (14:21 ± 04:04 seconds vs. 12:48 ± 03:27 seconds, P < 0.001). Our data indicate that at the same relative intensity, obese children report comparable or even higher perceived exertion during submaximal fitness testing than during maximal fitness testing. Perceived exertion in a sample of children and youth with obesity may be influenced by test duration and protocol design.

  2. Grief Experiences and Expectance of Suicide

    ERIC Educational Resources Information Center

    Wojtkowiak, Joanna; Wild, Verena; Egger, Jos

    2012-01-01

    Suicide is generally viewed as an unexpected cause of death. However, some suicides might be expected to a certain extent, which needs to be further studied. The relationships between expecting suicide, feeling understanding for the suicide, and later grief experiences were explored. In total, 142 bereaved participants completed the Grief…

  3. Intentions and Expectations in Differential Residential Selection

    ERIC Educational Resources Information Center

    Michelson, William; And Others

    1973-01-01

    This paper summarizes intentions and expectations in differential residential selection among families who had chosen to move. Wives appear at face value to assess alternatives in the selection process rationally, to be aware of limitations in housing and location they will experience, and to have expectations about behavioral changes consistent…

  4. Framing expectations in early HIV cure research.

    PubMed

    Dubé, Karine; Henderson, Gail E; Margolis, David M

    2014-10-01

    Language used to describe clinical research represents a powerful opportunity to educate volunteers. In the case of HIV cure research there is an emerging need to manage expectations by using the term 'experiment'. Cure experiments are proof-of-concept studies designed to evaluate novel paradigms to reduce persistent HIV-1 reservoirs, without any expectation of medical benefit.

  5. Parents' Role in Adolescents' Educational Expectations

    ERIC Educational Resources Information Center

    Rimkute, Laura; Hirvonen, Riikka; Tolvanen, Asko; Aunola, Kaisa; Nurmi, Jari-Erik

    2012-01-01

    The present study examined the extent to which mothers' and fathers' expectations for their offspring's future education, their level of education, and adolescents' academic achievement predict adolescents' educational expectations. To investigate this, 230 adolescents were examined twice while they were in comprehensive school (in the 7th and 9th…

  6. The Expectant Reader in Theory and Practice.

    ERIC Educational Resources Information Center

    Fowler, Lois Josephs; McCormick, Kathleen

    1986-01-01

    Offers a method of using reader response theory that emphasizes the expectations about a text and how those expectations are fulfilled or deflated. Specifically, students read traditional fables, fairy tales, and parables, and compare them to contemporary works such as Kafka's "Metamorphosis" and Marquez's "The Very Old Man With Enormous Wings."…

  7. Rising Tides: Faculty Expectations of Library Websites

    ERIC Educational Resources Information Center

    Nicol, Erica Carlson; O'English, Mark

    2012-01-01

    Looking at 2003-2009 LibQUAL+ responses at research-oriented universities in the United States, faculty library users report a significant and consistent rise in desires and expectations for library-provided online tools and websites, even as student user groups show declining or leveling expectations. While faculty, like students, also report…

  8. First Grade Teacher Expectations in Mathematics.

    ERIC Educational Resources Information Center

    Funkhouser, Charles P.

    The focus of this study was on the expectations that first-grade teachers have of the mathematics skills of their incoming first-grade students. At the end of one school year and at the beginning of the next school year, first-grade teachers (n=64) in rural and urban settings completed the Mathematics Skills Expectations Survey (MSES). The MSES…

  9. Raising Expectations is Aim of New Effort

    ERIC Educational Resources Information Center

    Sparks, Sarah D.

    2010-01-01

    Researchers and policymakers agree that teachers' expectations of what their students can do can become self-fulfilling prophecies for children's academic performance. Yet while the "soft bigotry of low expectations" has become an education catchphrase, scholars and advocates are just beginning to explore whether it is possible to prevent such…

  10. Teacher Expectations and the Able Child.

    ERIC Educational Resources Information Center

    Lee-Corbin, Hilary

    1994-01-01

    Two middle school teachers and two students in each of the teacher's classes were assessed for field dependence-independence (FDI). The teachers were interviewed about their students. Found that one teacher had higher expectations and one had lower expectations for the student who had the same FDI orientation as the teacher than for the student…

  11. What Respondents Really Expect from Researchers

    ERIC Educational Resources Information Center

    Kolar, Tomaz; Kolar, Iztok

    2008-01-01

    This article addresses the issue of falling response rates in telephone surveys. To better understand and maintain respondent goodwill, concepts of psychological contract and respondent expectations are introduced and explored. Results of the qualitative study show that respondent expectations are not only socially contingent but also…

  12. International Variations in Measuring Customer Expectations.

    ERIC Educational Resources Information Center

    Calvert, Philip J.

    2001-01-01

    Discussion of customer expectations of library service quality and SERVQUAL as a measurement tool focuses on two studies: one that compared a survey of Chinese university students' expectations of service quality to New Zealand students; and one that investigated national culture as a source of attitudes to customer service. (Author/LRW)

  13. Trends in Life Expectancy in Wellbeing

    ERIC Educational Resources Information Center

    Perenboom, R. J. M.; Van Herten, L. M.; Boshuizen, H. C.; Van Den Bos, G. A. M.

    2004-01-01

    Objectives: This paper describes and discusses trends in life expectancy in wellbeing between 1989 and 1998. Methods: Data on wellbeing, measured with the Bradburn Affect Balance Scale, are obtained from the Netherlands Continuous Health Interview Surveys for the calendar years 1989 to 1998. Using Sullivan's method, life expectancy in wellbeing is…

  14. Do Students Expect Compensation for Wage Risk?

    ERIC Educational Resources Information Center

    Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.

    2011-01-01

    We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by heterogeneity which handicapped the use of actual market wage dispersion as risk measure in earlier studies. Students in…

  15. Expectancy Theory in Media and Message Selection.

    ERIC Educational Resources Information Center

    Van Leuven, Jim

    1981-01-01

    Argues for reversing emphasis on uses and gratifications research in favor of an expectancy model which holds that selection of a particular medium depends on (1) the expectation that the choice will be followed by a message of interest and (2) the importance of that message in satisfying user's values. (PD)

  16. Course Expectations and Career Management Skills

    ERIC Educational Resources Information Center

    Kennedy, Marnie L.; Haines, Ben

    2008-01-01

    Course completion and student satisfaction are likely to be influenced by how realistic the expectations of students are when they enroll. This report explores the idea that students' expectations would be more realistic if students have well developed career management competencies. Recent research argues that lack of information is not the…

  17. Maximizing Output Power in a Cantilevered Piezoelectric Vibration Energy Harvester by Electrode Design

    NASA Astrophysics Data System (ADS)

    Du, Sijun; Jia, Yu; Seshia, Ashwin

    2015-12-01

    A resonant vibration energy harvester typically comprises a clamped anchor and a vibrating shuttle with a proof mass. Piezoelectric materials are embedded in locations of high strain in order to transduce mechanical deformation into electric charge. Conventional designs for piezoelectric vibration energy harvesters (PVEH) usually cover the entire surface area of the cantilever with piezoelectric material and metal electrode layers, without examining the trade-off involved in maximizing output power. This paper reports on the theory and experimental verification underpinning optimization of the active electrode area of a cantilevered PVEH in order to maximize output power. The analytical formulation utilizes Euler-Bernoulli beam theory to model the mechanical response of the cantilever. The expression for output power reduces to a fifth-order polynomial as a function of the electrode area. The maximum output power corresponds to the case when 44% of the cantilever area is covered by electrode metal. Experimental results are also provided to verify the theory.
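
    The optimization the abstract describes can be sketched numerically. The polynomial below is illustrative only (its coefficients are not those derived in the paper); a simple grid search over the coverage fraction then recovers the maximizing electrode area:

```python
# Sketch: find the electrode coverage maximizing a hypothetical
# fifth-order power polynomial P(a), with a = fractional electrode
# area in [0, 1]. Coefficients are illustrative; the paper derives
# its own polynomial from Euler-Bernoulli beam theory.

def power(a, coeffs):
    """Evaluate P(a) = sum(c_k * a^k) via Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * a + c
    return result

def argmax_coverage(coeffs, steps=10_000):
    """Grid-search the coverage fraction in [0, 1] maximizing P(a)."""
    best_a, best_p = 0.0, float("-inf")
    for i in range(steps + 1):
        a = i / steps
        p = power(a, coeffs)
        if p > best_p:
            best_a, best_p = a, p
    return best_a, best_p

# Illustrative coefficients c0..c5: power first rises with coverage,
# then falls as added electrode area dilutes the harvested charge.
coeffs = [0.0, 4.0, 0.0, 0.0, 0.0, -21.3]  # P(a) = 4a - 21.3a^5
a_opt, p_opt = argmax_coverage(coeffs)
print(f"optimal coverage ≈ {a_opt:.2f}")  # prints: optimal coverage ≈ 0.44
```

    A closed-form check under the same assumed coefficients: setting P'(a) = 4 - 106.5a⁴ = 0 gives a ≈ 0.44, agreeing with the grid search.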

  18. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.

    PubMed

    Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspective paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094

  1. Bioengineering and Coordination of Regulatory Networks and Intracellular Complexes to Maximize Hydrogen Production by Phototrophic Microorganisms

    SciTech Connect

    Tabita, F. Robert

    2013-07-30

    In this study, the Principal Investigator, F.R. Tabita, has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved and this will allow for the potential utilization of waste materials as a feed stock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.

  2. Force Irregularity Following Maximal Effort: The After-Peak Reduction.

    PubMed

    Doucet, Barbara M; Mettler, Joni A; Griffin, Lisa; Spirduso, Waneen

    2016-08-01

    Irregularities in force output are present throughout human movement and can impair task performance. We investigated the presence of a large force discontinuity (after-peak reduction, APR) that appeared immediately following peak in maximal effort ramp contractions performed with the thumb adductor and ankle dorsiflexor muscles in 25 young adult participants (76% males, 24% females; M age 24.4 years, SD = 7.1). The after-peak reduction displayed similar parameters in both muscle groups with comparable drops in force during the after-peak reduction minima (thumb adductor: 27.5 ± 7.5% maximal voluntary contraction; ankle dorsiflexor: 25.8 ± 6.2% maximal voluntary contraction). A trend for the presence of fewer after-peak reductions with successive ramp trials was observed, suggesting a learning effect. Further investigation should explore underlying neural mechanisms contributing to the after-peak reduction. PMID:27502241

  3. Matching and maximizing with concurrent ratio-interval schedules.

    PubMed

    Green, L; Rachlin, H; Hanson, J

    1983-11-01

    Animals exposed to standard concurrent variable-ratio variable-interval schedules could maximize overall reinforcement rate if, in responding, they showed a strong response bias toward the variable-ratio schedule. Tests with the standard schedules have failed to find such a bias and have been widely cited as evidence against maximization as an explanation of animal choice behavior. However, those experiments were confounded in that the value of leisure (behavior other than the instrumental response) partially offsets the value of reinforcement. The present experiment provides another such test using a concurrent procedure in which the confounding effects of leisure were mostly eliminated while the critical aspects of the concurrent variable-ratio variable-interval contingency were maintained: Responding in one component advanced only its ratio schedule while responding in the other component advanced both ratio schedules. The bias toward the latter component predicted by maximization theory was found.
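
    The maximization argument can be illustrated with a toy model. All parameter values below are invented for illustration, and the saturating VI feedback function is a crude assumption rather than the schedule used in the experiment:

```python
# Sketch: why maximizing overall reinforcement rate on a concurrent
# VR-VI schedule predicts a response bias toward the VR component.

def overall_rate(p_vr, total_rate=60.0, ratio=30.0, vi_rate=2.0):
    """Reinforcers per minute when a fraction p_vr of responses go to VR.

    The VR component pays in proportion to responses invested; the VI
    component pays close to its scheduled rate as long as it is sampled
    (modeled here by a crude saturating function).
    """
    vr_responses = p_vr * total_rate
    vi_responses = (1 - p_vr) * total_rate
    vr_reinf = vr_responses / ratio
    vi_reinf = vi_rate * vi_responses / (vi_responses + 5.0)
    return vr_reinf + vi_reinf

# Scan allocations: the rate-maximizing allocation invests just enough
# responding in VI to collect its scheduled reinforcers and sends the
# large remainder to VR, i.e., a strong bias toward the ratio schedule.
best = max((overall_rate(p / 100), p / 100) for p in range(101))
print(f"rate-maximizing VR allocation ≈ {best[1]:.2f}")
```

    Under these assumed parameters the optimum allocates well over half of all responses to the VR component, which is the bias the maximization account predicts.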

  4. Control of communication networks: welfare maximization and multipath transfers.

    PubMed

    Key, Peter B; Massoulié, Laurent

    2008-06-13

    We discuss control strategies for communication networks such as the Internet. We advocate the goal of welfare maximization as a paradigm for network resource allocation. We explore the application of this paradigm to the case of parallel network paths. We show that welfare maximization requires active balancing across paths by data sources, and potentially requires implementation of novel transport protocols. However, the only requirement from the underlying 'network layer' is to expose the marginal congestion cost of network paths to the 'transport layer'. We further illustrate the versatility of the corresponding layered architecture by describing transport protocols with the following properties: they achieve welfare maximization; each communication may use an arbitrary collection of paths, where paths may be from an overlay; and paths may be combined in series and parallel. We conclude by commenting on incentives, pricing and open problems. PMID:18325871
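
    A minimal sketch of the welfare-maximization idea for parallel paths, assuming a single source with logarithmic utility and quadratic path congestion costs (both choices are illustrative; the paper treats the general case): the source balances its rates using only each path's marginal congestion cost, exactly the quantity the network layer is asked to expose.

```python
# Sketch: a source splits its rate across two parallel paths to maximize
# welfare = log(total rate) - sum of path costs y_i^2 / (2 c_i).
# The per-path gradient needs only the marginal congestion cost y_i / c_i.

def optimize(c=(2.0, 1.0), lr=0.01, iters=20000):
    """Gradient ascent on welfare over per-path rates y."""
    y = [0.5, 0.5]
    for _ in range(iters):
        total = y[0] + y[1]
        for i in range(len(y)):
            grad = 1.0 / total - y[i] / c[i]   # marginal utility - marginal cost
            y[i] = max(1e-9, y[i] + lr * grad)
    return y

rates = optimize()
print(f"welfare-maximizing path rates ≈ {rates[0]:.2f}, {rates[1]:.2f}")
```

    At the optimum the marginal utility of extra rate equals the marginal congestion cost on every used path, so the cheaper (higher-capacity) path carries proportionally more traffic.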

  5. June's jolt: A utility's perspective

    SciTech Connect

    Fayne, H.

    1998-10-01

    The unprecedented Midwest market turmoil in late June was an anomalous situation stemming from a confluence of four key conditions. Claims of inappropriate behavior by utilities and other market participants are unfounded, and a retreat to re-regulation would be a serious mistake. During the week of June 22, the Midwestern United States saw a convergence of events that severely stretched the physical capabilities of the interconnected electric utility system and its supporting power generation. These events, added to an already tight capacity situation, included a significant amount of Midwestern power generation that went out of service, hotter than expected weather over a large area of the region, and various transmission constraints. Further complications resulted when some market participants defaulted on contracts. A number of utilities were forced into the hourly spot market to meet their load obligations. The utilities were forced into the market because their suppliers curtailed deliveries, they (the utilities) did not have sufficient capabilities, or their contracted resources were insufficient to cover their load requirements.

  6. Cascade Reservoirs Floodwater Resources Utilization

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2015-12-01

    A reasonable floodwater resources utilization method, based on dynamic control of the flood control limited water level of cascade reservoirs, is put forward in this paper. According to the probability distribution of the beginning time of the first flood and the ending time of the final flood from July to September, fuzzy statistical analysis was used to divide the main flood season. By fitting the flood-season membership functions of each period, the flood control limited water levels of the cascade reservoirs for each period were computed from the characteristic data of the reservoirs. In terms of the benefit-maximization and risk-minimization principle, a reasonable combination of flood control limited water levels for the cascade reservoirs was put forward.
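
    The membership-function step can be sketched as follows. The breakpoints, dates, and water levels below are invented for illustration and are not the paper's values:

```python
# Sketch: a triangular fuzzy membership function grading how strongly a
# day of the year belongs to the "main flood season", then interpolating
# the flood control limited water level from that membership degree.

def membership(day, rise=(182, 200), fall=(244, 273)):
    """Degree (0..1) to which a day of year belongs to the main flood season."""
    a, b = rise   # membership climbs from 0 to 1 (early-to-mid July)
    c, d = fall   # membership falls from 1 to 0 (September)
    if day <= a or day >= d:
        return 0.0
    if b <= day <= c:
        return 1.0
    if day < b:
        return (day - a) / (b - a)
    return (d - day) / (d - c)

def limited_level(day, low=155.0, high=158.0):
    """Flood control limited water level (m): held low in the main flood
    season, raised toward the storage level in the transition periods."""
    return high - (high - low) * membership(day)

print(limited_level(190), limited_level(220), limited_level(280))
```

    Mid-season days sit at the low (safest) limited level, while days in the fuzzy transition periods earn intermediate levels, which is the mechanism that frees up floodwater for utilization.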

  7. Cardiovascular consequences of bed rest: effect on maximal oxygen uptake

    NASA Technical Reports Server (NTRS)

    Convertino, V. A.

    1997-01-01

    Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
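
    The central-versus-peripheral reasoning above rests on the Fick principle, which factors maximal oxygen uptake into maximal cardiac output and the arteriovenous oxygen difference:

```latex
\dot{V}\mathrm{O}_{2\max}
  \;=\; \underbrace{HR_{\max} \times SV_{\max}}_{\dot{Q}_{\max}\ \text{(maximal cardiac output)}}
  \;\times\; \underbrace{(a\text{-}\bar{v})\mathrm{O}_{2}\,\text{difference}_{\max}}_{\text{peripheral extraction}}
```

    In these terms, bed rest raises HRmax but lowers SVmax by proportionally more, while the arteriovenous difference is unchanged, so the product VO2max falls.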

  9. Chemical structure elucidation from ¹³C NMR chemical shifts: efficient data processing using bipartite matching and maximal clique algorithms.

    PubMed

    Koichi, Shungo; Arisaka, Masaki; Koshino, Hiroyuki; Aoki, Atsushi; Iwata, Satoru; Uno, Takeaki; Satoh, Hiroko

    2014-04-28

    Computer-assisted chemical structure elucidation has been intensively studied since the first use of computers in chemistry in the 1960s. Most of the existing elucidators use a structure-spectrum database to obtain clues about the correct structure. Such a structure-spectrum database is expected to grow on a daily basis. Hence, the necessity to develop an efficient structure elucidation system that can adapt to the growth of a database has been also growing. Therefore, we have developed a new elucidator using practically efficient graph algorithms, including the convex bipartite matching, weighted bipartite matching, and Bron-Kerbosch maximal clique algorithms. The utilization of the two matching algorithms especially is a novel point of our elucidator. Because of these sophisticated algorithms, the elucidator exactly produces a correct structure if all of the fragments are included in the database. Even if not all of the fragments are in the database, the elucidator proposes relevant substructures that can help chemists to identify the actual chemical structures. The elucidator, called the CAST/CNMR Structure Elucidator, plays a complementary role to the CAST/CNMR Chemical Shift Predictor, and together these two functions can be used to analyze the structures of organic compounds.
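
    Of the algorithms named above, Bron-Kerbosch maximal clique enumeration is the most self-contained to sketch. Shown here in its basic form without pivoting; the adjacency data is an arbitrary toy graph, not a structure-spectrum example:

```python
# Sketch: Bron-Kerbosch enumeration of all maximal cliques of an
# undirected graph, with adjacency given as a dict of neighbor sets.

def bron_kerbosch(r, p, x, adj, out):
    """Report every maximal clique extending clique r, where p holds
    candidate vertices and x holds already-processed vertices."""
    if not p and not x:
        out.append(set(r))     # r cannot be extended: it is maximal
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p = p - {v}
        x = x | {v}

# Toy graph: triangle 1-2-3 with a tail 3-4-5.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # prints: [[1, 2, 3], [3, 4], [4, 5]]
```

    Production implementations add a pivot vertex to prune the candidate loop, which is what makes the algorithm practical on larger fragment graphs.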

  10. Projection of two biphoton qutrits onto a maximally entangled state.

    PubMed

    Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S

    2011-04-01

    Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons, a biphoton. The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. PMID:21517363
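
    For reference, the generalized maximally entangled state of two qutrits (the d = 3 analogue of a Bell state) takes the standard form below; the specific state used in the experiment may differ from it by local unitaries:

```latex
|\Psi\rangle \;=\; \frac{1}{\sqrt{3}}\,\bigl(|00\rangle + |11\rangle + |22\rangle\bigr)
```

    Projecting two qutrits onto such a state is the three-level counterpart of a Bell state measurement, which is why it enables teleportation and entanglement swapping with qutrits.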

  11. Maximal expiratory flow volume curve in quarry workers.

    PubMed

    Subhashini, Arcot Sadagopa; Satchidhanandam, Natesa

    2002-01-01

    Maximal Expiratory Flow Volume (MEFV) curves were recorded with a computerized spirometer (Med Spiror). Forced Vital Capacity (FVC), Forced Expiratory Volumes (FEV), and mean and maximal flow rates were obtained in 25 quarry workers who were free from respiratory disorders and 20 healthy control subjects. All functional values are lower in quarry workers than in the control subjects, with the largest reduction in quarry workers with a work duration of over 15 years, especially for FEF75. The effects are probably due to smoking rather than dust exposure.

  12. Expectation-Driven Text Extraction from Medical Ultrasound Images.

    PubMed

    Reul, Christian; Köberle, Philipp; Üçeyler, Nurcan; Puppe, Frank

    2016-01-01

    In this study an expectation-driven approach is proposed to extract data stored as pixel structures in medical ultrasound images. Prior knowledge about certain properties like the position of the text and its background and foreground grayscale values is utilized. Several open source Java libraries are used to pre-process the image and extract the textual information. The results are presented in an Excel table together with the outcome of several consistency checks. After manually correcting potential errors, the outcome is automatically stored in the main database. The proposed system yielded excellent results, reaching an accuracy of 99.94% and reducing the necessary human effort to a minimum. PMID:27577478
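
    The expectation-driven idea can be sketched in a few lines: prior knowledge fixes where the text lies and how bright it is, so extraction reduces to cropping the expected region and thresholding. The coordinates and grayscale values below are illustrative, and plain Python stands in for the Java libraries the study uses:

```python
# Sketch: expectation-driven extraction of a text region from a grayscale
# image (a list of pixel rows). Prior knowledge supplies the region's
# position and the expected foreground brightness.

def extract_text_region(image, top, left, height, width, fg_min=200):
    """Crop the expected text region and binarize it: pixels at least as
    bright as the expected foreground value become 1, the rest 0."""
    region = [row[left:left + width] for row in image[top:top + height]]
    return [[1 if px >= fg_min else 0 for px in row] for row in region]

# Tiny synthetic "ultrasound frame": dark background (~30) with a bright
# (~250) text-like blob at a known position in row 1.
frame = [[30] * 8 for _ in range(6)]
for col in (2, 3, 4):
    frame[1][col] = 250
binary = extract_text_region(frame, top=0, left=0, height=3, width=8)
print(binary[1])  # prints: [0, 0, 1, 1, 1, 0, 0, 0]
```

    In the study's pipeline a binarized region like this would then go to OCR and the consistency checks; the point of the sketch is only how strong priors turn a hard detection problem into a crop-and-threshold step.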

  13. Expectations predict chronic pain treatment outcomes.

    PubMed

    Cormier, Stéphanie; Lavigne, Geneviève L; Choinière, Manon; Rainville, Pierre

    2016-02-01

    Accumulating evidence suggests an association between patient pretreatment expectations and numerous health outcomes. However, it remains unclear if and how expectations relate to outcomes after treatments in multidisciplinary pain programs. The present study aims at investigating the predictive association between expectations and clinical outcomes in a large database of chronic pain patients. In this observational cohort study, participants were 2272 patients treated in one of 3 university-affiliated multidisciplinary pain treatment centers. All patients received personalized care, including medical, psychological, and/or physical interventions. Patient expectations regarding pain relief and improvements in quality of life and functioning were measured before the first visit to the pain centers and served as predictor variables. Changes in pain intensity, depressive symptoms, pain interference, and tendency to catastrophize, as well as satisfaction with pain treatment and global impressions of change at 6-month follow-up, were considered as treatment outcomes. Structural equation modeling analyses showed significant positive relationships between expectations and most clinical outcomes, and this association was largely mediated by patients' global impressions of change. Similar patterns of relationships between variables were also observed in various subgroups of patients based on sex, age, pain duration, and pain classification. Such results emphasize the relevance of patient expectations as a determinant of outcomes in multimodal pain treatment programs. Furthermore, the results suggest that superior clinical outcomes are observed in individuals who expect high positive outcomes as a result of treatment.

  14. Bison distribution under conflicting foraging strategies: site fidelity vs. energy maximization.

    PubMed

    Merkle, Jerod A; Cherry, Seth G; Fortin, Daniel

    2015-07-01

    Foraging strategies based on site fidelity and maximization of energy intake rate are two adaptive forces shaping animal behavior. Whereas these strategies can both be evolutionarily stable, they predict conflicting optimal behaviors when population abundance is in decline. In such a case, foragers employing an energy-maximizing strategy should reduce their use of low-quality patches as interference competition becomes less intense for high-quality patches. Foragers using a site fidelity strategy, however, should continue to use familiar patches. Because natural fluctuations in population abundance provide the only non-manipulative opportunity to evaluate adaptation to these evolutionary forces, few studies have examined these foraging strategies simultaneously. Using abundance and space use data from a free-ranging bison (Bison bison) population living in a meadow-forest matrix in Prince Albert National Park, Canada, we determined how individuals balance the trade-off between site fidelity and energy-maximizing patch choice strategies with respect to changes in population abundance. From 1996 to 2005, bison abundance increased from 225 to 475 and then decreased to 225 by 2013. During the period of population increase, population range size increased. This expansion involved the addition of relatively less profitable areas and patches, leading to a decrease in the mean expected profitability of the range. Yet, during the period of population decline, we detected neither a subsequent retraction in population range size nor an increase in mean expected profitability of the range. Further, patch selection models during the population decline indicated that, as density decreased, bison portrayed stronger fidelity to previously visited meadows, but no increase in selection strength for profitable meadows. Our analysis reveals that an energy-maximizing patch choice strategy alone cannot explain the distribution of individuals and populations, and site fidelity is an

  15. Bison distribution under conflicting foraging strategies: site fidelity vs. energy maximization.

    PubMed

    Merkle, Jerod A; Cherry, Seth G; Fortin, Daniel

    2015-07-01

    Foraging strategies based on site fidelity and maximization of energy intake rate are two adaptive forces shaping animal behavior. Whereas these strategies can both be evolutionarily stable, they predict conflicting optimal behaviors when population abundance is in decline. In such a case, foragers employing an energy-maximizing strategy should reduce their use of low-quality patches as interference competition becomes less intense for high-quality patches. Foragers using a site fidelity strategy, however, should continue to use familiar patches. Because natural fluctuations in population abundance provide the only non-manipulative opportunity to evaluate adaptation to these evolutionary forces, few studies have examined these foraging strategies simultaneously. Using abundance and space use data from a free-ranging bison (Bison bison) population living in a meadow-forest matrix in Prince Albert National Park, Canada, we determined how individuals balance the trade-off between site fidelity and energy-maximizing patch choice strategies with respect to changes in population abundance. From 1996 to 2005, bison abundance increased from 225 to 475 and then decreased to 225 by 2013. During the period of population increase, population range size increased. This expansion involved the addition of relatively less profitable areas and patches, leading to a decrease in the mean expected profitability of the range. Yet, during the period of population decline, we detected neither a subsequent retraction in population range size nor an increase in mean expected profitability of the range. Further, patch selection models during the population decline indicated that, as density decreased, bison portrayed stronger fidelity to previously visited meadows, but no increase in selection strength for profitable meadows. Our analysis reveals that an energy-maximizing patch choice strategy alone cannot explain the distribution of individuals and populations, and site fidelity is an ...

  16. What to Expect After Breast Reconstruction Surgery

    MedlinePlus

    ... Topic References What to expect after breast reconstruction surgery It’s important to have an idea of what ... regular mammograms. Possible risks during and after reconstruction surgery There are certain risks from any type of ...

  17. Classics in the Classroom: Great Expectations Fulfilled.

    ERIC Educational Resources Information Center

    Pearl, Shela

    1986-01-01

    Describes how an English teacher in a Queens, New York, ghetto school introduced her grade nine students to Charles Dickens's "Great Expectations." Focuses on students' responses, which eventually became enthusiastic, and discusses the use of classics within the curriculum. (KH)

  18. What To Expect Before a Lung Transplant

    MedlinePlus

    ... NHLBI on Twitter. What To Expect Before a Lung Transplant If you get into a medical center's ... friends also can offer support. When a Donor Lung Becomes Available OPTN matches donor lungs to recipients ...

  19. What to Expect During a Lung Transplant

    MedlinePlus

    ... NHLBI on Twitter. What To Expect During a Lung Transplant Just before lung transplant surgery, you will ... airway and its blood vessels to your heart. Lung Transplant The illustration shows the process of a ...

  20. Maximizing Thermal Efficiency and Optimizing Energy Management (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    Researchers at the Thermal Test Facility (TTF) on the campus of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado, are addressing maximizing thermal efficiency and optimizing energy management through analysis of efficient heating, ventilating, and air conditioning (HVAC) strategies, automated home energy management (AHEM), and energy storage systems.

  1. Dynamical generation of maximally entangled states in two identical cavities

    SciTech Connect

    Alexanian, Moorad

    2011-11-15

    The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.

  2. Neural network approach for solving the maximal common subgraph problem.

    PubMed

    Shoukry, A; Aboutabl, M

    1996-01-01

    A new formulation of the maximal common subgraph problem (MCSP), which is implemented using a two-stage Hopfield neural network, is given. Relative merits of this proposed formulation, with respect to current neural network-based solutions as well as classical sequential-search-based solutions, are discussed.
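
    The abstract names a two-stage Hopfield network but gives no formulation, so the following is only a generic discrete Hopfield sketch (Hebbian weight storage, threshold updates), not the paper's MCSP encoding; the `train`/`recall` helpers and the 8-bit pattern are illustrative choices:

```python
# Generic discrete Hopfield network sketch: store one bipolar pattern
# with a Hebbian weight matrix, then recover it from a corrupted input
# by repeated threshold updates.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=20):
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):  # asynchronous updates, one unit at a time
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]      # corrupt one bit
restored = recall(w, noisy)
```

With a single stored pattern and one flipped bit, the network settles back to the stored pattern in the first sweep.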

  3. Optimal technique for maximal forward rotating vaults in men's gymnastics.

    PubMed

    Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R

    2015-08-01

    In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible.

  4. Emotional Control and Instructional Effectiveness: Maximizing a Timeout

    ERIC Educational Resources Information Center

    Andrews, Staci R.

    2015-01-01

    This article provides recommendations for best practices for basketball coaches to maximize the instructional effectiveness of a timeout during competition. Practical applications are derived from research findings linking emotional intelligence to effective coaching behaviors. Additionally, recommendations are based on the implications of the…

  5. Children Use Categories to Maximize Accuracy in Estimation

    ERIC Educational Resources Information Center

    Duffy, Sean; Huttenlocher, Janellen; Crawford, L. Elizabeth

    2006-01-01

    The present study tests a model of category effects upon stimulus estimation in children. Prior work with adults suggests that people inductively generalize distributional information about a category of stimuli and use this information to adjust their estimates of individual stimuli in a way that maximizes average accuracy in estimation (see…

  6. Maximally entangled mixed-state generation via local operations

    SciTech Connect

    Aiello, A.; Puentes, G.; Voigt, D.; Woerdman, J. P.

    2007-06-15

    We present a general theoretical method to generate maximally entangled mixed states of a pair of photons initially prepared in the singlet polarization state. This method requires only local operations upon a single photon of the pair and exploits spatial degrees of freedom to induce decoherence. We report also experimental confirmation of these theoretical results.

  7. On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies

    ERIC Educational Resources Information Center

    Erev, Ido; Barron, Greg

    2005-01-01

    Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives…

  8. Maximizing plant density affects broccoli yield and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...

  9. Maximizing grain sorghum water use efficiency under deficit irrigation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...

  10. Maximality and Idealized Cognitive Models: The Complementation of Spanish "Tener."

    ERIC Educational Resources Information Center

    Hilferty, Joseph; Valenzuela, Javier

    2001-01-01

    Discusses the bare-noun phrase (NP) complementation pattern of the Spanish verb "tener" (have). Shows that the maximality of the complement NP is dependent upon three factors: (1) idiosyncratic valence requirements; (2) encyclopedic knowledge related to possession; and (3) contextualized semantic construal. (Author/VWL)

  11. The Profit-Maximizing Firm: Old Wine in New Bottles.

    ERIC Educational Resources Information Center

    Felder, Joseph

    1990-01-01

    Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)

  12. Maximal tree size of few-qubit states

    NASA Astrophysics Data System (ADS)

    Le, Huy Nguyen; Cai, Yu; Wu, Xingyao; Rabelo, Rafael; Scarani, Valerio

    2014-06-01

    Tree size (TS) is an interesting measure of complexity for multiqubit states: not only is it in principle computable, but one can obtain lower bounds for it. In this way, it has been possible to identify families of states whose complexity scales superpolynomially in the number of qubits. With the goal of progressing in the systematic study of the mathematical property of TS, in this work we characterize the tree size of pure states for the case where the number of qubits is small, namely, 3 or 4. The study of three qubits does not hold great surprises, insofar as the structure of entanglement is rather simple; the maximal TS is found to be 8, reached for instance by the |W> state. The study of four qubits yields several insights: in particular, the most economic description of a state is found not to be recursive. The maximal TS is found to be 16, reached for instance by a state called |Ψ(4)> which was already discussed in the context of four-photon down-conversion experiments. We also find that the states with maximal tree size form a set of zero measure: a smoothed version of tree size over a neighborhood of a state (ɛ-TS) reduces the maximal values to 6 and 14, respectively. Finally, we introduce a notion of tree size for mixed states and discuss it for a one-parameter family of states.

  13. Nursing Students' Awareness and Intentional Maximization of Their Learning Styles

    ERIC Educational Resources Information Center

    Mayfield, Linda Riggs

    2012-01-01

    This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…


  15. PROFIT-MAXIMIZING PRINCIPLES, INSTRUCTIONAL UNITS FOR VOCATIONAL AGRICULTURE.

    ERIC Educational Resources Information Center

    BARKER, RICHARD L.

    The purpose of this guide is to assist vocational agriculture teachers in stimulating junior and senior high school student thinking, understanding, and decision making as associated with profit-maximizing principles of farm operation for use in farm management. It was developed under a U.S. Office of Education grant by teacher-educators, a farm…

  16. Curriculum and Testing Strategies to Maximize Special Education STAAR Achievement

    ERIC Educational Resources Information Center

    Johnson, William L.; Johnson, Annabel M.; Johnson, Jared W.

    2015-01-01

    This document is from a presentation at the 2015 annual conference of the Science Teachers Association of Texas (STAT). The two sessions (each listed as feature sessions at the state conference) examined classroom strategies the presenter used in his chemistry classes to maximize Texas end-of-course chemistry test scores for his special population…

  17. Density-metric unimodular gravity: Vacuum maximal symmetry

    SciTech Connect

    Abbassi, A.H.; Abbassi, A.M.

    2011-05-15

    We have investigated the vacuum maximally symmetric solutions of recently proposed density-metric unimodular gravity theory. The results are widely different from inflationary scenario. The exponential dependence on time in deSitter space is substituted by a power law. Open space-times with non-zero cosmological constant are excluded.

  18. Teacher Praise: Maximizing the Motivational Impact. Teaching Strategies.

    ERIC Educational Resources Information Center

    McVey, Mary D.

    2001-01-01

    Recognizes the influence of praise on human behavior, and provides specific suggestions on how to maximize the positive effects of praise when intended as positive reinforcement. Examines contingency, specificity, and selectivity aspects of praise. Cautions teachers to avoid the controlling effects of praise and the possibility that praise may…

  19. Response Independence, Matching, and Maximizing: A Reply to Heyman.

    ERIC Educational Resources Information Center

    Staddon, J. E. R.; Motheral, Susan

    1979-01-01

    Heyman's major criticism (TM 504 810) of Staddon and Motheral's reinforcement maximization model is that it does not consider "local" and "interchangeover" interresponse times separately. We show that this separation may not be necessary. Heyman's apparent gain in comprehensiveness may not be worth the added complexity. (Author/RD)

  20. Modifying Softball for Maximizing Learning Outcomes in Physical Education

    ERIC Educational Resources Information Center

    Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue

    2014-01-01

    Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…

  1. Maximizing Data Use: A Focus on the Completion Agenda

    ERIC Educational Resources Information Center

    Phillips, Brad C.; Horowitz, Jordan E.

    2013-01-01

    The completion agenda is in full force at the nation's community colleges. To maximize the impact colleges can have on improving completion, colleges must organize around using student progress and outcome data to monitor and track their efforts. Unfortunately, colleges are struggling to identify relevant data and to mobilize staff to review…

  2. Maximization of total genetic variance in breed conservation programmes.

    PubMed

    Cervantes, I; Meuwissen, T H E

    2011-12-01

    The preservation of the maximum genetic diversity in a population is one of the main objectives within a breed conservation programme. We applied the maximum variance total (MVT) method to a unique population in order to maximize the total genetic variance. The function maximization was performed by the annealing algorithm. We have selected the parents and the mating scheme at the same time simply maximizing the total genetic variance (a mate selection problem). The scenario was compared with a scenario of full-sib lines, a MVT scenario with a rate of inbreeding restriction, and with a minimum coancestry selection scenario. The MVT method produces sublines in a population attaining a similar scheme as the full-sib sublining that agrees with other authors that the maximum genetic diversity in a population (the lowest overall coancestry) is attained in the long term by subdividing it in as many isolated groups as possible. The application of a restriction on the rate of inbreeding jointly with the MVT method avoids the consequences of inbreeding depression and maintains the effective size at an acceptable minimum. The scenario of minimum coancestry selection gave higher effective size values, but a lower total genetic variance. A maximization of the total genetic variance ensures more genetic variation for extreme traits, which could be useful in case the population needs to adapt to a new environment/production system.
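
    The abstract reports maximizing total genetic variance with an annealing algorithm. As a hedged illustration of the general mechanism only, here is a minimal simulated-annealing maximizer applied to a toy bit-vector objective; the MVT objective, parents, and mating scheme from the paper are not modeled:

```python
import math
import random

# Generic simulated-annealing maximizer: always accept improvements,
# accept worse moves with probability exp(delta / t) so the early,
# hot phase can escape local optima; t decays geometrically.

def anneal(objective, state, neighbor, t0=1.0, cooling=0.995,
           steps=2000, seed=0):
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = objective(cand) - objective(cur)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur = cand
            if objective(cur) > objective(best):
                best = cur
        t *= cooling
    return best

def onemax(s):
    # Toy objective: number of 1s in the bit vector (maximum: all ones).
    return sum(s)

def flip_one_bit(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

result = anneal(onemax, [0] * 12, flip_one_bit)
```

Late in the schedule the acceptance rule becomes effectively greedy, so for this toy objective the search reaches the all-ones optimum.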

  3. Maximizing the Online Learning Experience: Suggestions for Educators and Students

    ERIC Educational Resources Information Center

    Cicco, Gina

    2011-01-01

    This article will discuss ways of maximizing the online course experience for teachers- and counselors-in-training. The widespread popularity of online instruction makes it a necessary learning experience for future teachers and counselors (Ash, 2011). New teachers and counselors take on the responsibility of preparing their students for real-life…

  4. Fertilizer placement to maximize nitrogen use by fescue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The method of fertilizer nitrogen (N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained claypan soils of southeastern Kansas. This study was conducted to determine if knifed N r...

  5. Comparison of evidenced and expected ADN competencies.

    PubMed

    Burch, S; Joyce-Nagata, B; Reeb, R

    1991-01-01

    These study findings indicate that nursing service administrators in the State of Mississippi expect strong technical level skills from the ADN. Congruency between nursing education and nursing service was validated. The majority of role competencies outlined by nurse educators were validated as both expected and evidenced for the ADN in the State of Mississippi. Competencies need to be continually evaluated to reflect changes in the health delivery system as related to the ADN.

  6. Taxation and life expectancy in Western Europe.

    PubMed

    Bagger, P J

    2004-06-01

    With the exception of Denmark, life expectancy in Western Europe has shown a significant increase over the last decades. During that period of time overall taxation has increased in most of the countries, especially in Denmark. We, therefore, examined whether taxation could influence life expectancy in Western Europe. We used information on the sum of income tax and employees' social contribution in percentage of gross wage earnings from the OECD database and data on disability adjusted life expectancy at birth from the World Health Organization database. We arbitrarily only included countries with populations in excess of 4 millions and thereby excluded smaller countries where tax exemption is part of the national monetary policy. We found that disability adjusted life expectancy at birth was inversely correlated to the total tax burden in Western Europe. We speculate whether a threshold exists where high taxes exert a negative influence on life expectancy despite increased welfare spending. The study suggests that tax burden should be considered among the multiple factors influencing life expectancy. PMID:15242031

  7. Utilization of the terrestrial cyanobacterial sheet

    NASA Astrophysics Data System (ADS)

    Katoh, Hiroshi; Tomita-Yokotani, Kaori; Furukawa, Jun; Kimura, Shunta; Yamaguchi, Yuji; Takenaka, Hiroyuki; Kohno, Nobuyuki

    2016-07-01

    The terrestrial nitrogen-fixing cyanobacterium Nostoc commune lives in habitats ranging from polar regions to deserts. N. commune forms visible colonies composed of extracellular polymeric substances. Because of its extracellular polysaccharides, desiccation tolerance, and nitrogen fixation, N. commune is expected to be useful for agriculture, food, and terraforming. To demonstrate these potential abilities, N. commune sheets were prepared for convenient use and evaluated by plant growth and radioactive accumulation. We will discuss the utilization of terrestrial cyanobacteria in closed environments.

  8. Understanding the Hows and Whys of Decision-Making: From Expected Utility to Divisive Normalization.

    PubMed

    Glimcher, Paul

    2014-01-01

    Over the course of the last century, economists and ethologists have built detailed models from first principles of how humans and animals should make decisions. Over the course of the last few decades, psychologists and behavioral economists have gathered a wealth of data at variance with the predictions of these economic models. This has led to the development of highly descriptive models that can often predict what choices people or animals will make but without offering any insight into why people make the choices that they do--especially when those choices reduce a decision-maker's well-being. Over the course of the last two decades, neurobiologists working with economists and psychologists have begun to use our growing understanding of how the nervous system works to develop new models of how the nervous system makes decisions. The result, a growing revolution at the interdisciplinary border of neuroscience, psychology, and economics, is a new field called Neuroeconomics. Emerging neuroeconomic models stand to revolutionize our understanding of human and animal choice behavior by combining fundamental properties of neurobiological representation with decision-theoretic analyses. In this overview, one class of these models, based on the widely observed neural computation known as divisive normalization, is presented in detail. The work demonstrates not only that a discrete class of computation widely observed in the nervous system is fundamentally ubiquitous, but how that computation shapes behaviors ranging from visual perception to financial decision-making. It also offers the hope of reconciling economic analysis of what choices we should make with psychological observations of the choices we actually do make. PMID:25637264
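
    Divisive normalization has a simple canonical form: each represented value is divided by a saturation constant plus the summed activity of the whole set. The sketch below is a bare illustration of that computation; the `sigma` constant and the option values are made-up numbers, not parameters from this overview:

```python
# Divisive normalization sketch: rescale each value by the pooled
# activity of the entire option set plus a saturation constant sigma.

def divisive_normalize(values, sigma=1.0):
    denom = sigma + sum(values)
    return [v / denom for v in values]

# Adding a third option enlarges the shared denominator, compressing
# the represented values of the first two options while preserving
# their ratio -- the kind of context dependence the text describes.
two = divisive_normalize([10.0, 8.0])
three = divisive_normalize([10.0, 8.0, 6.0])
```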

  9. Life expectancy in Canada--an overview.

    PubMed

    Adams, O

    1990-01-01

    At 73 years for men and more than 80 years for women, Canada's life expectancy at birth compares favourably with other developed countries; Japan currently leads the world with 75.6 years for men and 81.4 years for women. In 1920-1922, fewer than six out of ten Canadians could expect to survive to their 65th birthday; by 1985-1987, this had risen to eight out of ten. At the oldest ages, the increases in survival are even more striking. In 1920-1922, just over one in ten Canadians could expect to reach their 85th birthday; by 1985-1987, this had increased to more than three out of ten. Since the 1920s, life expectancy has been higher in the Western provinces and lower in Atlantic Canada and Quebec. In 1950-1952, for example, a person born in Saskatchewan could expect to live four years longer than a person born in Quebec. By 1985-1987, this difference had been reduced to just over one year. Women have made much greater gains in life expectancy than men. In 1920-1922, women had an advantage in life expectancy over men of less than two years; by 1970-1972, this had more than tripled to seven years. Married men and women have a distinct advantage in longevity over other marital status categories. Married men may expect to live over eight years longer than never-married men, and more than ten years longer than widowed men. Married women can expect to live three years longer than never-married women, and four years longer than women who are either divorced or widowed. As of 1986, a boy born in a highest-income quintile area in urban Canada can expect to live almost six years longer than a boy born in a lowest-income quintile area. For girls, the difference is almost two years. However, this socio-economic differential narrowed from 1971 to 1986.

  10. Comparison of myocardial ²⁰¹Tl clearance after maximal and submaximal exercise: implications for diagnosis of coronary disease: concise communication

    SciTech Connect

    Massie, B.M.; Wisneski, J.; Kramer, B.; Hollenberg, M.; Gertz, E.; Stern, D.

    1982-05-01

    Recently the quantitation of regional ²⁰¹Tl clearance has been shown to increase the sensitivity of the scintigraphic detection of coronary disease. Although ²⁰¹Tl clearance rates might be expected to vary with the degree of exercise, this relationship has not been explored. We therefore evaluated the rate of decrease in myocardial ²⁰¹Tl activity following maximal and submaximal stress in seven normal subjects and 21 patients with chest pain, using the seven-pinhole tomographic reconstruction technique. In normals, the mean ²⁰¹Tl clearance rate declined from 41% ± 7 over a 3-hr period with maximal exercise to 25% ± 5 after 3 hr at a submaximal level (p < 0.001). Similar differences in clearance rates were found in the normally perfused regions of the left ventricle in patients with chest pain, depending on whether or not a maximal end point (defined as either the appearance of ischemia or reaching 85% of age-predicted heart rate) was achieved. In five patients who did not reach these end points, 3-hr clearance rates in uninvolved regions averaged 25% ± 2, in contrast to a mean of 38% ± 5 for such regions in 15 patients who exercised to ischemia or an adequate heart rate. These findings indicate that clearance criteria derived from normals can be applied to patients who are stressed maximally, even if the duration of exercise is limited, but that caution must be used in interpreting clearance rates in those who do not exercise to an accepted end point.

  11. Controlling Your Utility Rates.

    ERIC Educational Resources Information Center

    Lucht, Ray; Dembowski, Frederick L.

    1985-01-01

    A cost-effective alternative to high utility bills for middle-sized and smaller utility users is the service of utility rate consultants. The consultants analyze utility invoices for the previous 12 months to locate available refunds or credits. (MLF)

  12. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    PubMed Central

    2013-01-01

    Background Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. PMID:23522344
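
    NMF as described here seeks non-negative matrices W and H whose product closely approximates X. As a point of reference, this sketch implements the classical Lee-Seung multiplicative updates for the l2 (Frobenius) objective that the abstract contrasts with, not the correntropy-maximizing NMF-MCC algorithm itself; the example matrix is a made-up rank-2 illustration:

```python
import numpy as np

# Classical multiplicative-update NMF (Frobenius objective): starting
# from random positive factors, the Lee-Seung updates keep W and H
# non-negative while monotonically reducing ||X - WH||_F.

def nmf(X, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-4
    H = rng.random((k, n)) + 1e-4
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Small non-negative matrix with an exact non-negative rank-2
# factorization, so a rank-2 fit should drive the residual near zero.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(X, k=2)
residual = float(np.linalg.norm(X - W @ H))
```

The same multiplicative-update skeleton is what correntropy-based variants modify: they change the objective (and hence the update ratios), not the non-negativity mechanism.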

  13. Expectations for Melodic Contours Transcend Pitch

    PubMed Central

    Graves, Jackson E.; Micheyl, Christophe; Oxenham, Andrew J.

    2015-01-01

    The question of what makes a good melody has interested composers, music theorists, and psychologists alike. Many of the observed principles of good “melodic continuation” involve melodic contour – the pattern of rising and falling pitch within a sequence. Previous work has shown that contour perception can extend beyond pitch to other auditory dimensions, such as brightness and loudness. Here, we show with two experiments that the generalization of contour perception to non-traditional dimensions also extends to melodic expectations. In the first experiment, subjective ratings for three-tone sequences that vary in brightness or loudness conformed to the same general contour-based expectations as pitch sequences. In the second experiment, we modified the sequence of melody presentation such that melodies with the same beginning were blocked together. This change produced substantively different results, but the patterns of ratings remained similar across the three auditory dimensions. Taken together, these results suggest that 1) certain well-known principles of melodic expectation (such as the expectation for a reversal following a skip) are dependent on long-term context, and 2) these expectations are not unique to the dimension of pitch and may instead reflect more general principles of perceptual organization. PMID:25365571

  14. Components of attention modulated by temporal expectation.

    PubMed

    Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus

    2015-01-01

    By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is modulated by the observer's temporal expectations. We extended the investigation from single-stimulus recognition to whole report (Experiment 1) and partial report (Experiment 2). Cue-stimulus foreperiods were distributed geometrically using time steps of 500 ms. In high expectancy conditions, the probability that the stimulus would appear on the next time step, given that it had not yet appeared, was high, whereas in low expectancy conditions, the probability was low. The speed of encoding the stimuli into VSTM was higher in the high expectancy conditions. In line with the Easterbrook (1959) hypothesis, under high temporal expectancy, the processing was also more focused (selective). First, the storage capacity of VSTM was lower, so that fewer stimuli were encoded into VSTM. Second, the distribution of attentional weights across stimuli was less even: The efficiency of selecting targets rather than distractors for encoding into VSTM was higher, as was the spread of the attentional weights of the target letters. PMID:25068851

  15. The subjective marijuana experience: great expectations.

    PubMed

    Stark-Adamec, C; Adamec, R E; Pihl, R O

    1981-10-01

    Participants' expectations of marijuana effects are frequently cited as unmeasured post hoc explanations of variability in response to the drug, or of data that fail to conform to the experimenters' expectations of the drug's effects. Twenty-four male volunteers, experienced in the use of marijuana, participated in research involving the administration of coltsfoot, placebo, and marijuana to investigate whether expectancy of marijuana effects could be measured and related to observed effects. Data for the Expectancy Questionnaire were derived from the Marihuana Effects Questions filled out when potential participants volunteered for the study and were compared to the High Questionnaire filled out after drug administration sessions. Expectancy was shown to have a quantifiable effect on the drug experience (both placebo and marijuana), even in an experimental situation. Prior frequency of occurrence of specific effects was positively related to both the intensity and duration of the effects in the laboratory. The data are discussed in terms of the learned components of getting stoned and in terms of the social nature of cannabis intoxication.

  17. Orbiter electrical equipment utilization baseline

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The baseline for utilization of Orbiter electrical equipment in both electrical and Environmental Control and Life Support System (ECLSS) thermal analyses is established. It is a composite catalog of Space Shuttle equipment, as defined in the Shuttle Operational Data Book. The major functions and expected usage of each component type are described. Functional descriptions are designed to provide a fundamental understanding of the Orbiter electrical equipment, to ensure correlation of equipment usage within nominal analyses, and to aid analysts in the formulation of off-nominal, contingency analyses.

  18. Solar energy research and utilization

    NASA Technical Reports Server (NTRS)

    Cherry, W. R.

    1974-01-01

    The role of solar energy is visualized in the heating and cooling of buildings, in the production of renewable gaseous, liquid, and solid fuels, and in the production of electric power over the next 45 years. Potential impacts of solar energy on various energy markets, and estimated costs of such solar energy systems, are discussed. Some typical solar energy utilization processes are described in detail. It is expected that at least 20% of the total U.S. energy requirements by 2020 will be supplied by solar energy.

  19. Expected rates with mini-arrays for air showers

    NASA Technical Reports Server (NTRS)

    Hazen, W. E.

    1985-01-01

    As a guide in the design of mini-arrays used to exploit the Linsley effect in the study of air showers, it is useful to calculate the expected rates. The results can aid in the choice of detectors and their placement, or in predicting the utility of existing detector systems. Furthermore, the potential of the method can be appraised for the study of large showers. Specifically, we treat the case of a mini-array whose dimensions are small enough compared with the distance to the axes of the showers of interest that it can be considered a point detector. The input information is taken from the many previous studies of air showers by other groups. The calculations give: (1) the expected integral rate, F(sigma, rho), for disk thickness sigma (or rise time t_1/2), with local particle density rho as a parameter; (2) the effective detection area, A(N), with sigma(min) and rho(min) as parameters; and (3) the expected rate of data collection, F_L(N), versus shower size N.

  20. Utilization of the terrestrial cyanobacteria

    NASA Astrophysics Data System (ADS)

    Katoh, Hiroshi; Tomita-Yokotani, Kaori; Furukawa, Jun; Kimura, Shunta; Yokoshima, Mika; Yamaguchi, Yuji; Takenaka, Hiroyuki

    The terrestrial, N2-fixing cyanobacterium Nostoc commune is expected to find uses in agriculture, food production, and terraforming because of its extracellular polysaccharide, desiccation tolerance, and nitrogen fixation. Previously, the first author analyzed desiccation-related genes and suggested that they are related to nitrogen fixation and metabolism. In this report, we suggest possible agricultural applications of the cyanobacterium. We also found that N. commune accumulated radioactive compounds in Fukushima, Japan, after the nuclear accident. We therefore investigated the decontamination of radioactive compounds from the surface soil by the cyanobacterium and showed that it accumulates these compounds. We will discuss the utilization of terrestrial cyanobacteria under closed environments. Keywords: desiccation, terrestrial cyanobacteria, bioremediation, agriculture