Science.gov

Sample records for expected utility maximization

  1. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    SciTech Connect

    Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.

    2013-02-15

    We consider the problem of maximizing expected terminal power utility (a risk-sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite-state Markov process. The main novelty is that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process whose intensity is also driven by the unobserved Markovian factor process. This leads to more realistic modeling of many practical situations, such as markets with liquidity restrictions; on the other hand, it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to power utility. For log-utilities, a different approach is presented in Fujimoto et al. (Preprint, 2012).

  2. Why Contextual Preference Reversals Maximize Expected Value

    PubMed Central

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391

  3. Classical subjective expected utility

    PubMed Central

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-01-01

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty. PMID:23559375

  4. Classical subjective expected utility.

    PubMed

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-04-23

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty. PMID:23559375

  5. Robust estimation by expectation maximization algorithm

    NASA Astrophysics Data System (ADS)

    Koch, Karl Rudolf

    2013-02-01

    A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining components gives the distribution for an outlier. Missing data are introduced to carry the information as to which observation belongs to which component. The unknown location parameters and the unknown scale parameter of the linear model are estimated by the iteratively applied EM algorithm. The E (expectation) step determines the expected value of the likelihood function given the observations and the current estimate of the unknown parameters, while the M (maximization) step computes new estimates by maximizing this expectation. In comparison with Huber's M-estimation, the EM algorithm not only identifies outliers by assigning small weights to large residuals but also estimates the outliers themselves, so they can be corrected using the parameters of the linear model, freed from the distortions caused by gross errors. Monte Carlo methods with random variates from the normal distribution then give expectations, variances, covariances, and confidence regions for functions of the parameters estimated with the outliers taken care of. The method is demonstrated on laser scanner measurements contaminated by gross errors.
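
    A minimal sketch of these two steps on a simplified one-dimensional problem, with a single wide outlier component standing in for the paper's full linear-model formulation (all data and parameter values below are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic residuals: 190 clean observations plus 10 gross errors.
      r = np.concatenate([rng.normal(0.0, 1.0, 190), rng.normal(0.0, 10.0, 10)])

      w = np.array([0.9, 0.1])      # mixture weights (inlier, outlier)
      mu = np.array([0.0, 0.0])     # component means
      sigma = np.array([1.0, 5.0])  # component standard deviations

      def normal_pdf(x, m, s):
          return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

      for _ in range(100):
          # E-step: posterior probability of each component for every residual.
          dens = np.stack([w[k] * normal_pdf(r, mu[k], sigma[k]) for k in range(2)])
          resp = dens / dens.sum(axis=0)
          # M-step: re-estimate weights, means, and scales from responsibilities.
          nk = resp.sum(axis=1)
          w = nk / r.size
          mu = (resp * r).sum(axis=1) / nk
          sigma = np.sqrt((resp * (r - mu[:, None]) ** 2).sum(axis=1) / nk)

      outliers = resp[1] > 0.5      # residuals attributed to the wide component
      print(f"flagged {outliers.sum()} outliers; sigma = {np.round(sigma, 2)}")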

  6. Steganalysis feature improvement using expectation maximization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection, with the clusters representing clean images and stego images with embedding percentages between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.

  7. Expectation maximization applied to GMTI convoy tracking

    NASA Astrophysics Data System (ADS)

    Koch, Wolfgang

    2002-08-01

    Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar, we address in particular the task of track maintenance for ground-moving convoys consisting of a small number of individual vehicles. In the proposed approach, the identity of the individual vehicles within the convoy is no longer stressed. Their kinematical state vectors are instead treated as internal degrees of freedom characterizing the convoy, which is considered a collective unit. In this context, the Expectation Maximization (EM) technique, originally developed for incomplete-data problems in statistical inference and first applied to tracking by Streit et al., seems to be a promising approach. We suggest embedding the EM algorithm in a more traditional Bayesian tracking framework to deal with false or unwanted sensor returns. The proposed distinction between external and internal data association conflicts (i.e., those among the convoy vehicles) should also enable the application of sequential track extraction techniques introduced by Van Keuk for aircraft formations, providing estimates of the number of individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground-moving vehicles can be masked by the sensor-specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can persist over a long series of sensor updates and will therefore seriously degrade track quality unless properly handled. Moreover, for ground-moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used

  8. Using explicit decision rules to manage issues of justice, risk, and ethics in decision analysis: when is it not rational to maximize expected utility?

    PubMed

    Deber, R B; Goel, V

    1990-01-01

    Concepts of justice, risk, and ethics can be merged with decision analysis by requiring the analyst to specify explicitly a decision rule or sequence of rules. Decision rules are categorized by whether they consider: 1) aspects of outcome distributions beyond central tendencies; 2) probabilities as well as utilities of outcomes; and 3) means as well as ends. This formulation suggests that distribution-based decision rules could address both risk (for an individual) and justice (for the population). Rational choice under risk might ignore probability information if choices are one-time only (vs. repeated events) or if one branch contains unlikely but disastrous outcomes. Incorporating risk attitude into decision rules rather than utilities could facilitate use of multiattribute approaches to measuring outcomes. Certain ethical concerns could be addressed by prior specification of rules for allowing particular branches. Examples, including selection of polio vaccine strategies, are discussed, and theoretical and practical implications of a decision rule approach are noted. PMID:2196412

  9. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  10. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has been attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination; the sensitivity and specificity of SBI were reported to be about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
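
    A minimal sketch of the clustering idea, assuming synthetic RGB pixel data and scikit-learn's GaussianMixture in place of the authors' implementation (the features, colors, and cluster count here are illustrative only):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Stand-in for a WCE frame flattened to (N, 3) RGB rows: mostly
      # mucosa-like pinks plus a small population of darker blood-like reds.
      mucosa = rng.normal([180.0, 120.0, 110.0], 12.0, size=(4800, 3))
      blood = rng.normal([150.0, 30.0, 30.0], 10.0, size=(200, 3))
      pixels = np.clip(np.vstack([mucosa, blood]), 0.0, 255.0)

      # EM clustering: fit a two-component Gaussian mixture to the pixel colors.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
      labels = gmm.predict(pixels)

      # Flag the cluster whose mean color has the lowest green-to-red ratio,
      # a crude stand-in for "blood-like".
      blood_cluster = int(np.argmin(gmm.means_[:, 1] / gmm.means_[:, 0]))
      print(f"suspected bleeding pixels: {(labels == blood_cluster).sum()}")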

  11. Inexact Matching of Ontology Graphs Using Expectation-Maximization

    PubMed Central

    Doshi, Prashant; Kolli, Ravikanth; Thomas, Christopher

    2009-01-01

    We present a new method for mapping ontology schemas that address similar domains. The problem of ontology matching is crucial since we are witnessing a decentralized development and publication of ontological data. We formulate the problem of inferring a match between two ontologies as a maximum likelihood problem, and solve it using the technique of expectation-maximization (EM). Specifically, we adopt directed graphs as our model for ontology schemas and use a generalized version of EM to arrive at a map between the nodes of the graphs. We exploit the structural, lexical, and instance similarity between the graphs, and differ from previous approaches in the way we utilize them to arrive at a possibly inexact match. Inexact matching is the process of finding a best possible match between two graphs when exact matching is not possible or is computationally difficult. In order to scale the method to large ontologies, we identify the computational bottlenecks and adapt the generalized EM by using a memory-bounded partitioning scheme. We provide comparative experimental results in support of our method on two well-known ontology alignment benchmarks and discuss their implications. PMID:20160892

  12. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
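
    A minimal sketch of the branch-point selection rule, with hypothetical option names and toy utility curves standing in for the cached expected-utility distributions described above:

      from typing import Callable, Dict, Iterable

      # Hypothetical cached distributions: each maps a possible start time to
      # the expected utility of the plan suffix beginning with that option.
      utility: Dict[str, Callable[[float], float]] = {
          "traverse_ridge": lambda t: 10.0 - 0.02 * t,  # decays as start slips
          "safe_detour": lambda t: 8.0,                 # time-insensitive backup
      }

      def select_option(now: float, eligible: Iterable[str]) -> str:
          """At a branch point, pick the eligible option whose cached expected
          utility is highest at the current time; no replanning is required."""
          return max(eligible, key=lambda opt: utility[opt](now))

      # Early on the ridge wins; after enough delay the detour is preferred.
      print(select_option(10.0, ["traverse_ridge", "safe_detour"]))   # traverse_ridge
      print(select_option(150.0, ["traverse_ridge", "safe_detour"]))  # safe_detour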

  13. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    SciTech Connect

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated from the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite-bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.

  14. Expectation-Maximization Binary Clustering for Behavioural Annotation.

    PubMed

    Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631

  15. Expectation-Maximization Binary Clustering for Behavioural Annotation

    PubMed Central

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631

  16. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.

  17. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    PubMed Central

    Rujirakul, Kanokmon; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, lowering the complexity of those stages. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelized matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations: the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared with traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times relative to PCA and parallel PCA, respectively. PMID:24955405

  18. The Noisy Expectation-Maximization Algorithm for Multiplicative Noise Injection

    NASA Astrophysics Data System (ADS)

    Osoba, Osonde; Kosko, Bart

    2016-03-01

    We generalize the noisy expectation-maximization (NEM) algorithm to allow arbitrary modes of noise injection besides just adding noise to the data. The noise must still satisfy a NEM positivity condition. This generalization includes the important special case of multiplicative noise injection. A generalized NEM theorem shows that all measurable modes of injecting noise will speed the average convergence of the EM algorithm if the noise satisfies a generalized NEM positivity condition. This noise-benefit condition has a simple quadratic form for Gaussian and Cauchy mixture models in the case of multiplicative noise injection. Simulations show a multiplicative-noise EM speed-up of more than 27% in a simple Gaussian mixture model. Injecting blind noise only slowed convergence. A related theorem gives a sufficient condition for an average EM noise benefit for arbitrary modes of noise injection if the data model comes from the general exponential family of probability density functions. A final theorem shows that injected noise slows EM convergence on average if the NEM inequalities reverse and the noise satisfies a negativity condition.
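
    For context, the positivity condition in the additive-noise case of the earlier NEM work can be written as below (our transcription, with complete-data likelihood f(y, z | θ), injected noise N, current iterate θ_k, and true parameter θ*; the generalization here replaces Y + N with an arbitrary measurable mode of injection, e.g. multiplicative Y · N):

      \mathbb{E}_{Y,Z,N \mid \theta^{*}}\!\left[ \ln \frac{f(Y + N,\, Z \mid \theta_k)}{f(Y,\, Z \mid \theta_k)} \right] \;\ge\; 0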

  19. Expectation maximization reconstruction for circular orbit cone-beam CT

    NASA Astrophysics Data System (ADS)

    Dong, Baoyu

    2008-03-01

    Cone-beam computed tomography (CBCT) is a technique for imaging cross-sections of an object using a series of X-ray measurements taken from different angles around the object. It has been widely applied in diagnostic medicine and industrial non-destructive testing. Traditional CT reconstructions are limited by many kinds of artifacts and can yield unsatisfactory images. To reduce image noise and artifacts, we propose a statistical iterative approach for cone-beam CT reconstruction. First, the theory of maximum likelihood estimation is extended to the X-ray scan, and an expectation-maximization (EM) formula is deduced for direct reconstruction of circular-orbit cone-beam CT. Then the EM formula is implemented in cone-beam geometry for artifact reduction. The EM algorithm is a feasible iterative method based on the statistical properties of the Poisson distribution, and it can provide good-quality reconstructions after a few iterations for cone-beam CT. Finally, experimental results with computer-simulated data and real CT data are presented to verify that our method is effective.
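
    For reference, the classical ML-EM update for emission tomography (the Shepp-Vardi iteration), presumably the starting point for the cone-beam formula deduced above, multiplies the current image estimate by a back-projected ratio of measured to predicted projections (our notation: system matrix a_{ij}, measured data y_i, image estimate \lambda^{(n)}):

      \lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_k a_{ik}\, \lambda_k^{(n)}}

    The multiplicative form preserves positivity and is consistent with the Poisson statistics mentioned in the abstract.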

  20. Robust Utility Maximization Under Convex Portfolio Constraints

    SciTech Connect

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-04-15

    We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  1. Matching Pupils and Teachers to Maximize Expected Outcomes.

    ERIC Educational Resources Information Center

    Ward, Joe H., Jr.; And Others

    To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…

  2. Price of oil and OPEC behavior: a utility maximization model

    SciTech Connect

    Adeinat, M.K.

    1985-01-01

    There is growing evidence that OPEC has neither behaved as a cartel, at least in the last decade, nor maximized the discounted value of its profits as would be suggested by the theory of exhaustible resources. This dissertation attempts to find a way out of this dead end by proposing a utility maximization model. According to the utility maximization model, the decision of how much crude oil each country produces is determined by the country's budgetary needs. The objective of each country is to choose present and future consumption (the latter financed by future income, which can in turn be generated either by investment out of current income or by the proceeds of its oil reserves) at time t to maximize its utility function subject to its budget and absorptive capacity constraints. The model predicted that whenever the amount of savings exceeds a country's absorptive capacity as a result of higher oil prices, the country responds by cutting back its production of oil. This prediction is supported by the following empirical findings: (1) the marginal propensity to save (MPS) exceeded the marginal propensity to invest (MPI) during the period of study (1967-1981), implying that OPEC countries were facing an absorptive capacity constraint, and (2) the quantity of oil production responded negatively to permanent income in all three countries, the response being highly significant for those countries with the greatest budget surpluses.

  3. An expected utility maximizer walks into a bar…

    PubMed Central

    Glimcher, Paul W.; Lazzaro, Stephanie C.

    2013-01-01

    We conducted field experiments at a bar to test whether blood alcohol concentration (BAC) correlates with violations of the generalized axiom of revealed preference (GARP) and the independence axiom. We found that individuals with BACs well above the legal limit for driving adhere to GARP and independence at rates similar to those who are sober. This finding led to the fielding of a third experiment to explore how risk preferences might vary as a function of BAC. We found gender-specific effects: Men did not exhibit variations in risk preferences across BACs. In contrast, women were more risk averse than men at low BACs but exhibited increasing tolerance towards risks as BAC increased. Based on our estimates, men and women’s risk preferences are predicted to be identical at BACs nearly twice the legal limit for driving. We discuss the implications for policy-makers. PMID:24244072

  4. An expected utility maximizer walks into a bar…

    PubMed

    Burghart, Daniel R; Glimcher, Paul W; Lazzaro, Stephanie C

    2013-06-01

    We conducted field experiments at a bar to test whether blood alcohol concentration (BAC) correlates with violations of the generalized axiom of revealed preference (GARP) and the independence axiom. We found that individuals with BACs well above the legal limit for driving adhere to GARP and independence at rates similar to those who are sober. This finding led to the fielding of a third experiment to explore how risk preferences might vary as a function of BAC. We found gender-specific effects: Men did not exhibit variations in risk preferences across BACs. In contrast, women were more risk averse than men at low BACs but exhibited increasing tolerance towards risks as BAC increased. Based on our estimates, men and women's risk preferences are predicted to be identical at BACs nearly twice the legal limit for driving. We discuss the implications for policy-makers. PMID:24244072

  5. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
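
    A minimal sketch of the linear-programming idea for the no-coding special case, assuming divisible messages whose piece values are proportional to size (all probabilities, capacities, sizes, and values are illustrative; the paper's full formulation also covers coded combinations across links):

      import numpy as np
      from scipy.optimize import linprog

      p = np.array([0.9, 0.7, 0.5])       # probability each link is working
      cap = np.array([4.0, 6.0, 6.0])     # link capacities (bits)
      size = np.array([5.0, 5.0, 4.0])    # message sizes (bits)
      value = np.array([10.0, 6.0, 2.0])  # message values
      density = value / size              # value per bit of each message

      M, L = size.size, p.size
      # x[m, l] = bits of message m sent on link l, flattened row-major.
      c = -np.outer(density, p).ravel()   # negate: linprog minimizes

      A, b = [], []
      for l in range(L):                  # respect each link's capacity
          row = np.zeros((M, L)); row[:, l] = 1.0
          A.append(row.ravel()); b.append(cap[l])
      for m in range(M):                  # send each message at most once
          row = np.zeros((M, L)); row[m, :] = 1.0
          A.append(row.ravel()); b.append(size[m])

      res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b), bounds=(0, None))
      print(res.x.reshape(M, L).round(2))
      print("expected decodable value:", round(-res.fun, 2))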

  6. AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization

    NASA Astrophysics Data System (ADS)

    Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui

    High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (E-M) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem

  7. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits, and survival rate to measure utility. Applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain several results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the values of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  8. A compact formulation for maximizing the expected number of transplants in kidney exchange programs

    NASA Astrophysics Data System (ADS)

    Alvelos, Filipe; Klimentova, Xenia; Rais, Abdur; Viana, Ana

    2015-05-01

    Kidney exchange programs (KEPs) allow the exchange of kidneys between incompatible donor-recipient pairs. Optimization approaches can help KEPs in defining which transplants should be made among all incompatible pairs according to some objective. The most common objective is to maximize the number of transplants. In this paper, we propose an integer programming model which addresses the objective of maximizing the expected number of transplants, given that there are equal probabilities of failure associated with vertices and arcs. The model is compact, i.e. has a polynomial number of decision variables and constraints, and therefore can be solved directly by a general purpose integer programming solver (e.g. Cplex).
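
    As a simplified illustration of the objective (our rendering, not the paper's compact formulation): a cycle of exchanges is all-or-nothing, so under uniform independent failure probabilities p for vertices and q for arcs, a cycle c with |c| pairs contributes

      \mathbb{E}[\text{transplants from } c] \;=\; |c|\,(1-p)^{|c|}\,(1-q)^{|c|}

    and an optimal solution selects vertex-disjoint cycles maximizing the sum of these expected contributions.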

  9. Single-Trial Extraction of Pure Somatosensory Evoked Potential Based on Expectation Maximization Approach.

    PubMed

    Chen, Wei; Chang, Chunqi; Hu, Yong

    2016-01-01

    It is of great importance for intraoperative monitoring to accurately extract somatosensory evoked potentials (SEPs) and track their changes quickly. Currently, multi-trial averaging is widely adopted for SEP signal extraction. However, because variations of SEP features across trials are averaged away, SEPs estimated in this way are not suitable for real-time monitoring of every single trial. To handle this issue, a number of single-trial SEP extraction approaches have been developed in the literature, such as ARX and SOBI, but the performance of most is limited by insufficient utilization of the multi-trial and multi-condition structure of the signals. In this paper, a novel Bayesian model of SEP signals is proposed to make systematic use of multi-trial and multi-condition priors and other structural information in the signal, by integrating both a cortical source propagation model and a SEP basis components model, and an Expectation Maximization (EM) algorithm is developed for single-trial SEP estimation under this model. Numerical simulations demonstrate that the developed method provides reasonably good single-trial estimates of SEP as long as the signal-to-noise ratio (SNR) of the measurements is no worse than -25 dB. The effectiveness of the proposed method is further verified by its application to real SEP measurements from a number of subjects during spinal surgeries. Using the proposed approach, the main SEP features (i.e., latencies) can be reliably estimated on a single-trial basis, so the variation of latencies across trials can be traced, providing solid support for surgical intraoperative monitoring. PMID:26742104

  10. The predictive validity of prospect theory versus expected utility in health utility measurement.

    PubMed

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health. PMID:19833400

  11. Disconfirmation of Expectations of Utility in e-Learning

    ERIC Educational Resources Information Center

    Cacao, Rosario

    2013-01-01

    Using pre-training and post-training paired surveys in e-learning-based training courses, we have compared the "expectations of utility," measured at the beginning of an e-learning course, with the "perceptions of utility," measured at the end of the course, and related them to the trainees' motivation. We have concluded…

  12. Power Dependence in Individual Bargaining: The Expected Utility of Influence.

    ERIC Educational Resources Information Center

    Lawler, Edward J.; Bacharach, Samuel B.

    1979-01-01

    This study uses power-dependence theory as a framework for examining whether and how parties use information on each other's dependence to estimate the utility of an influence attempt. The effect of dependence in expected utilities is investigated (by role playing) in bargaining between employer and employee for a pay raise. (MF)

  13. Gaussian beam decomposition of high frequency wave fields using expectation-maximization

    SciTech Connect

    Ariel, Gil; Engquist, Bjoern; Tanushev, Nicolay M.; Tsai, Richard

    2011-03-20

    A new numerical method for approximating highly oscillatory wave fields as a superposition of Gaussian beams is presented. The method estimates the number of beams and their parameters automatically. This is achieved by an expectation-maximization algorithm that fits real, positive Gaussians to the energy of the highly oscillatory wave fields and its Fourier transform. Beam parameters are further refined by an optimization procedure that minimizes the difference between the Gaussian beam superposition and the highly oscillatory wave field in the energy norm.

  14. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed at robust estimation of the parameters and states of the hemodynamic model using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods able to jointly estimate the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable for the hemodynamic state estimates; they were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL, and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method that works in the filtering sense.

  15. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    NASA Astrophysics Data System (ADS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.

  16. Implementation and evaluation of an expectation maximization reconstruction algorithm for gamma emission breast tomosynthesis

    PubMed Central

    Gong, Zongyi; Klanian, Kelly; Patel, Tushita; Sullivan, Olivia; Williams, Mark B.

    2012-01-01

    Purpose: We are developing a dual modality tomosynthesis breast scanner in which x-ray transmission tomosynthesis and gamma emission tomosynthesis are performed sequentially with the breast in a common configuration. In both modalities projection data are obtained over an angular range of less than 180° from one side of the mildly compressed breast resulting in incomplete and asymmetrical sampling. The objective of this work is to implement and evaluate a maximum likelihood expectation maximization (MLEM) reconstruction algorithm for gamma emission breast tomosynthesis (GEBT). Methods: A combination of Monte Carlo simulations and phantom experiments was used to test the MLEM algorithm for GEBT. The algorithm utilizes prior information obtained from the x-ray breast tomosynthesis scan to partially compensate for the incomplete angular sampling and to perform attenuation correction (AC) and resolution recovery (RR). System spatial resolution, image artifacts, lesion contrast, and signal to noise ratio (SNR) were measured as image quality figures of merit. To test the robustness of the reconstruction algorithm and to assess the relative impacts of correction techniques with changing angular range, simulations and experiments were both performed using acquisition angular ranges of 45°, 90° and 135°. For comparison, a single projection containing the same total number of counts as the full GEBT scan was also obtained to simulate planar breast scintigraphy. Results: The in-plane spatial resolution of the reconstructed GEBT images is independent of source position within the reconstructed volume and independent of acquisition angular range. For 45° acquisitions, spatial resolution in the depth dimension (the direction of breast compression) is degraded with increasing source depth (increasing distance from the collimator surface). Increasing the acquisition angular range from 45° to 135° both greatly reduces this depth dependence and improves the average depth

  17. Clustering performance comparison using K-means and expectation maximization algorithms

    PubMed Central

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-01-01

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representative clustering algorithms are K-means and the expectation maximization (EM) algorithm. Logistic regression extends linear regression analysis to category-type dependent variables through a linear combination of independent variables, and this statistical approach can be used to predict the possibility of occurrence of an event. However, classifying all data by logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
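
    A minimal sketch contrasting the two clustering algorithms on synthetic two-dimensional data using scikit-learn (the data and cluster counts are illustrative; the paper's experiments use red wine quality data):

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two groups with unequal spreads, mimicking overlapping feature clusters.
      X = np.vstack([rng.normal(0.0, 0.5, (150, 2)),
                     rng.normal([2.0, 0.0], 1.5, (150, 2))])

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      em = GaussianMixture(n_components=2, random_state=0).fit(X)

      # K-means assigns by nearest centroid (hard, spherical clusters); EM
      # assigns by posterior probability under a Gaussian mixture (soft,
      # elliptical clusters), which copes better with unequal variances.
      print("k-means cluster sizes:", np.bincount(km.labels_))
      print("EM cluster sizes:     ", np.bincount(em.predict(X)))
      print("EM responsibilities, first point:", em.predict_proba(X[:1]).round(3))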

  18. Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures

    SciTech Connect

    Melis, Anastasios

    2014-12-31

    The project addressed the following technical barrier from the Biological Hydrogen Production section of the Fuel Cell Technologies Program Multi-Year Research, Development and Demonstration Plan: Low Sunlight Utilization Efficiency in Photobiological Hydrogen Production is due to a Large Photosystem Chlorophyll Antenna Size in Photosynthetic Microorganisms (Barrier AN: Light Utilization Efficiency).

  19. Subjective Expected Utility: A Model of Decision-Making.

    ERIC Educational Resources Information Center

    Fischoff, Baruch; And Others

    1981-01-01

    Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)

  20. A simple test of expected utility theory using professional traders.

    PubMed

    List, John A; Haigh, Michael S

    2005-01-18

    We compare behavior across students and professional traders from the Chicago Board of Trade in a classic Allais paradox experiment. Our experiment tests whether independence, a necessary condition in expected utility theory, is systematically violated. We find that both students and professionals exhibit some behavior consistent with the Allais paradox, but the data pattern does suggest that the trader population falls prey to the Allais paradox less frequently than the student population. PMID:15634739

  1. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    NASA Astrophysics Data System (ADS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-05-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on expectation maximization (EM), where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method.

  2. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI.

    PubMed

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-05-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on expectation maximization (EM), where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method. PMID:23588215

  3. An online expectation maximization algorithm for exploring general structure in massive networks

    NASA Astrophysics Data System (ADS)

    Chai, Bianfang; Jia, Caiyan; Yu, Jian

    2015-11-01

    The mixture model and stochastic block model (SBM) for structure discovery employ a broad and flexible definition of vertex classes, so they are able to explore a wide variety of structure. Compared with existing algorithms based on the SBM (whose time complexities are O(mc²), where m and c are the number of edges and clusters), algorithms for the mixture model can deal with networks with a large number of communities more efficiently due to their O(mc) time complexity. However, mixture-model algorithms using the expectation maximization (EM) technique are still too slow to deal with real million-node networks, since they compute hidden variables on the entire network in each iteration. In this paper, an online variational EM algorithm is designed to improve the efficiency of the EM algorithms. In each iteration, our online algorithm samples a node and estimates its cluster memberships only from its adjacency links; model parameters are then estimated from the memberships of the sampled node and the old model parameters obtained in the previous iteration. The algorithm thus updates model parameters with the links of each newly sampled node and explores the general structure of massive and growing networks with millions of nodes and hundreds of clusters in hours. Compared with relevant algorithms on synthetic and real networks, the proposed online algorithm costs less with little or no degradation of accuracy. The results illustrate that the presented algorithm offers a good trade-off between precision and efficiency.
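
    A minimal sketch of the online EM idea on a generic two-component Gaussian mixture (not the paper's network-specific model): each iteration processes a single sampled observation and blends its sufficient statistics into running averages with a decaying step size. All numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      data = np.concatenate([rng.normal(-2.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000)])

      w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0])
      # Running sufficient statistics: weight, first and second moments.
      s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)

      for t in range(1, 20001):
          x = data[rng.integers(data.size)]    # sample a single observation
          dens = w * np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2.0 * np.pi * var)
          r = dens / dens.sum()                # E-step for this point only
          eta = 1.0 / (t**0.6 + 10.0)          # decaying step size
          s0 = (1.0 - eta) * s0 + eta * r      # stochastic statistic updates
          s1 = (1.0 - eta) * s1 + eta * r * x
          s2 = (1.0 - eta) * s2 + eta * r * x * x
          w, mu = s0 / s0.sum(), s1 / s0       # M-step from running statistics
          var = np.maximum(s2 / s0 - mu**2, 1e-3)

      print("weights", w.round(2), "means", mu.round(2), "variances", var.round(2))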

  4. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm

    PubMed Central

    Liu, Haiguang; Spence, John C.H.

    2014-01-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated. PMID:25485120

  5. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

    In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art. PMID:26221663

  6. Statistical models of synaptic transmission evaluated using the expectation-maximization algorithm.

    PubMed Central

    Stricker, C; Redman, S

    1994-01-01

    Amplitude fluctuations of evoked synaptic responses can be used to extract information on the probabilities of release at the active sites, and on the amplitudes of the synaptic responses generated by transmission at each active site. The parameters that describe this process must be obtained from an incomplete data set represented by the probability density of the evoked synaptic response. In this paper, the equations required to calculate these parameters using the Expectation-Maximization algorithm and the maximum likelihood criterion have been derived for a variety of statistical models of synaptic transmission. These models are ones where the probabilities associated with the different discrete amplitudes in the evoked responses are a) unconstrained, b) binomial, and c) compound binomial. The discrete amplitudes may be separated by equal (quantal) or unequal amounts, with or without quantal variance. Alternative models have been considered where the variance associated with the discrete amplitudes is sufficiently large such that no quantal amplitudes can be detected. These models involve the sum of a normal distribution (to represent failures) and a unimodal distribution (to represent the evoked responses). The implementation of the algorithm is described in each case, and its accuracy and convergence have been demonstrated. PMID:7948679
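
    As a concrete illustration of the simplest of these models (unconstrained weights, equal quantal spacing, no quantal variance), the sketch below fits by EM a mixture whose component means are constrained to integer multiples of a common quantal size q, with a shared noise width. All parameter values are invented.

```python
import numpy as np

# EM for a quantal mixture: amplitude levels at k*q (k = 0 is a failure),
# shared Gaussian noise sigma, unconstrained weights w.
rng = np.random.default_rng(3)
q_true, sigma_true = 1.5, 0.3
k_max = 4
w_true = np.array([0.3, 0.35, 0.2, 0.1, 0.05])
k_samples = rng.choice(k_max + 1, size=2000, p=w_true)
amps = k_samples * q_true + rng.normal(0, sigma_true, size=2000)

ks = np.arange(k_max + 1)
w, q, sigma = np.full(k_max + 1, 1 / (k_max + 1)), 1.0, 1.0
for _ in range(200):
    # E-step: responsibilities of each discrete amplitude level
    r = w * np.exp(-0.5 * ((amps[:, None] - ks * q) / sigma) ** 2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weights freely; q and sigma have weighted closed forms
    w = r.mean(axis=0)
    q = (r * ks * amps[:, None]).sum() / (r * ks ** 2).sum()
    sigma = np.sqrt((r * (amps[:, None] - ks * q) ** 2).sum() / len(amps))
print(f"q={q:.3f}  sigma={sigma:.3f}  weights={np.round(w, 2)}")
```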

  7. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Srivastava, Ashok N.

    2009-01-01

    This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks, and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data and network. The distributed algorithm we develop in this paper is provably correct, i.e., it converges to the same result as the corresponding centralized algorithm, and it can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
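
    The property that makes distributed EM exact is that the M-step depends on the data only through additive sufficient statistics, so per-peer statistics can be summed and the global update matches the centralized one. The sketch below simulates this with five data shards and direct summation standing in for the paper's local P2P messaging; the mixture model and data are toys.

```python
import numpy as np

# Each "peer" computes local sufficient statistics for a 1-D Gaussian
# mixture on its own shard; summing them gives the centralized M-step.
rng = np.random.default_rng(4)
shards = [np.concatenate([rng.normal(-1, 1, 300), rng.normal(4, 1, 300)])
          for _ in range(5)]                       # 5 peers, one shard each

K = 2
mu, var, w = np.array([0.0, 1.0]), np.ones(K), np.full(K, 0.5)
for _ in range(50):
    # local E-step + local sufficient statistics on every peer
    stats = np.zeros((3, K))                       # [sum r, sum r*x, sum r*x^2]
    for x in shards:
        r = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
        r /= r.sum(axis=1, keepdims=True)
        stats += [r.sum(0), (r * x[:, None]).sum(0), (r * x[:, None] ** 2).sum(0)]
    # global M-step from the aggregated statistics
    n = stats[0]
    w, mu = n / n.sum(), stats[1] / n
    var = stats[2] / n - mu ** 2
print("means:", mu.round(2), "vars:", var.round(2), "weights:", w.round(2))
```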

  8. 76 FR 51060 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... FR 8452-8460), pursuant to section 515 of the Treasury and General Government Appropriations Act for... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... June 2011 (76 FR 37376) intended to ensure and maximize the quality, objectivity, utility,...

  9. Bandwidth utilization maximization of scientific RF communication systems

    SciTech Connect

    Rey, D.; Ryan, W.; Ross, M.

    1997-01-01

    A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per hertz of radio frequency bandwidth. The goal of this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques, with little or no increase in the transmit power, which is usually very limited on airborne systems. Teaming with New Mexico State University, we developed and simulated an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, for this application. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.

  10. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem in which multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility, with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of the solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and found gains from optimization of up to 12%. We also modeled land availability for conservation action as a stochastic process and determined the decline in total utility relative to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868

  11. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem in which multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility, with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of the solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and found gains from optimization of up to 12%. We also modeled land availability for conservation action as a stochastic process and determined the decline in total utility relative to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
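
    The comparison at the heart of this study, a greedy heuristic versus exact integer programming for budget-constrained utility maximization, can be reproduced on a toy instance. In the sketch below the parcel utilities, costs, and budget are invented, and exhaustive enumeration stands in for the integer-programming solver; on this instance the greedy solution lands about 8% below the optimum, the same kind of gap the study reports.

```python
from itertools import combinations

# Toy budget-constrained utility maximization: greedy heuristic vs optimum.
utility = [10, 7, 7, 3]
cost    = [5, 4, 4, 3]
budget  = 8
parcels = range(len(utility))

# greedy: take parcels by utility/cost ratio while budget remains
order = sorted(parcels, key=lambda i: utility[i] / cost[i], reverse=True)
chosen, spent = [], 0
for i in order:
    if spent + cost[i] <= budget:
        chosen.append(i)
        spent += cost[i]
greedy_util = sum(utility[i] for i in chosen)

# exact: enumerate all feasible subsets (integer programming in the study)
best_util = max(sum(utility[i] for i in s)
                for r in range(len(utility) + 1)
                for s in combinations(parcels, r)
                if sum(cost[i] for i in s) <= budget)

print(f"greedy={greedy_util}, optimal={best_util}, "
      f"gain={100 * (best_util - greedy_util) / greedy_util:.1f}%")
```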

  12. Association Studies with Imputed Variants Using Expectation-Maximization Likelihood-Ratio Tests

    PubMed Central

    Huang, Kuan-Chieh; Sun, Wei; Wu, Ying; Chen, Mengjie; Mohlke, Karen L.; Lange, Leslie A.; Li, Yun

    2014-01-01

    Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly many markers are well or better imputed, but at the same time even more markers with relatively low minor allele frequency are imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty into downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: I) when posterior probabilities of all potential genotypes are estimated; and II) when only the one-dimensional summary statistic, the imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test (EM-LRT) for association based on posterior probabilities. When only imputed dosages are available (scenario II), we first sample the genotype probabilities from their posterior distribution given the dosages and then apply the EM-LRT to the sampled probabilities. Our simulations show that the type I error of the proposed EM-LRT methods is protected under both scenarios. Compared with existing methods, EM-LRT-Prob (for scenario I) offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II) achieves a similar level of statistical power to EM-LRT-Prob and outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women's Health Initiative Study, provide further support for the validity and efficiency of our proposed methods. PMID:25383782
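
    A minimal sketch of the scenario I idea: with genotype posterior probabilities p[i, g] from imputation, a phenotype model is fitted by EM over the unobserved genotype, and a likelihood-ratio statistic compares the fit against the null of no genetic effect. The simulated data, the Gaussian phenotype model, and the EM updates below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import norm, chi2

# EM likelihood-ratio test with imputed genotype posteriors p[i, g].
rng = np.random.default_rng(5)
n = 2000
g_true = rng.binomial(2, 0.3, n)
y = 0.2 * g_true + rng.normal(size=n)
p = np.full((n, 3), 0.1)                      # toy imputation posteriors
p[np.arange(n), g_true] = 0.8                 # 80% mass on the true genotype

G = np.array([0.0, 1.0, 2.0])

def em_loglik(fit_b1):
    """Fit y ~ N(b0 + b1*g, s2) by EM over the latent genotype g."""
    b0, b1, s2 = y.mean(), 0.0, y.var()
    for _ in range(200):
        dens = p * norm.pdf(y[:, None], b0 + b1 * G, np.sqrt(s2))
        r = dens / dens.sum(axis=1, keepdims=True)        # E-step
        eg, eg2 = r @ G, r @ G ** 2                       # posterior moments
        if fit_b1:                                        # M-step (weighted LS)
            b1 = (np.mean(eg * y) - eg.mean() * y.mean()) / \
                 (np.mean(eg2) - eg.mean() ** 2)
        b0 = y.mean() - b1 * eg.mean()
        s2 = np.mean((r * (y[:, None] - b0 - b1 * G) ** 2).sum(axis=1))
    dens = p * norm.pdf(y[:, None], b0 + b1 * G, np.sqrt(s2))
    return np.log(dens.sum(axis=1)).sum()

lrt = 2 * (em_loglik(True) - em_loglik(False))
print(f"LRT = {lrt:.1f}, p = {chi2.sf(lrt, df=1):.2e}")
```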

  13. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm

    NASA Astrophysics Data System (ADS)

    Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  14. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect. PMID:26758232

  15. Expected utility theory and risky choices with health outcomes.

    PubMed

    Hellinger, F J

    1989-03-01

    Studies of people's attitudes towards risk in the health sector often involve a comparison of the desirability of alternative medical treatments. Since the outcome of a medical treatment cannot be known with certainty, patients and physicians must make a choice that involves risk. Each medical treatment may be characterized as a gamble (or risky option) with a set of outcomes and associated probabilities. Expected utility theory (EUT) is the standard method to predict people's choices under uncertainty. The author presents the results of a survey that suggests people are very risk averse towards gambles involving health-related outcomes. The survey also indicates that there is significant variability in the risk attitudes across individuals for any given gamble and that there is significant variability in the risk attitudes of a given individual across gambles. The variability of risk attitudes of a given individual suggests that risk attitudes are not absolute but are functions of the parameters in the gamble. PMID:2927183

  16. Recursive expectation-maximization clustering: a method for identifying buffering mechanisms composed of phenomic modules.

    PubMed

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A; Hartman, John L

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  17. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    NASA Astrophysics Data System (ADS)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  18. Deriving the Expected Utility of a Predictive Model When the Utilities Are Uncertain

    PubMed Central

    Cooper, Gregory F.; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how a model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease. PMID:16779022
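
    One simple way to realize this idea is Monte Carlo: draw utility values from distributions expressing the uncertainty, apply the expected-utility-maximizing decision rule implied by each draw to the model's predicted probabilities, and average the realized utility. The sketch below does exactly that with invented model outputs and utility ranges; it illustrates the concept, not the authors' specific derivation.

```python
import numpy as np

# Monte Carlo expected utility of a diagnostic model under uncertain utilities.
rng = np.random.default_rng(6)
p_disease = rng.beta(2, 5, size=500)          # model's predicted P(disease)
has_disease = rng.random(500) < p_disease     # simulated true outcomes

n_draws, totals = 2000, []
for _ in range(n_draws):
    # utilities of (treat, disease), (treat, healthy), (no-treat, disease),
    # (no-treat, healthy); the ranges are assumptions, not from the paper
    u_td, u_th = rng.uniform(0.6, 0.9), rng.uniform(0.7, 1.0)
    u_nd, u_nh = rng.uniform(0.0, 0.3), 1.0
    # decision rule: treat whenever treating has higher expected utility
    treat = p_disease * u_td + (1 - p_disease) * u_th > \
            p_disease * u_nd + (1 - p_disease) * u_nh
    u = np.where(treat,
                 np.where(has_disease, u_td, u_th),
                 np.where(has_disease, u_nd, u_nh))
    totals.append(u.mean())
print(f"expected utility = {np.mean(totals):.3f} +/- {np.std(totals):.3f}")
```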

  19. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    PubMed

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit the Ki bias of sPatlak analysis in regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
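
    For readers unfamiliar with Patlak analysis, the graphical method the 4D framework builds on is compact enough to sketch: after equilibration the tissue activity obeys C_t(t) = Ki * integral(C_p) + V * C_p(t), so the slope of C_t/C_p against integral(C_p)/C_p estimates Ki. The input function, kinetic constants, and noise below are invented; the paper's contribution is estimating these parameters inside the EM reconstruction itself rather than after it.

```python
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(1, 60, 60)                        # frame mid-times (min)
cp = 8 * np.exp(-0.1 * t) + 1.0                   # toy plasma input function
ki_true, v_true = 0.05, 0.4
int_cp = np.cumsum(cp) * (t[1] - t[0])            # running integral of C_p
ct = ki_true * int_cp + v_true * cp               # irreversible Patlak model
ct += rng.normal(0, 0.05, size=ct.shape)          # measurement noise

x, y = int_cp / cp, ct / cp                       # Patlak coordinates
late = t > 20                                     # fit only the linear tail
ki, v = np.polyfit(x[late], y[late], 1)           # slope = Ki, intercept = V
print(f"estimated Ki = {ki:.4f} (true {ki_true}), V = {v:.3f} (true {v_true})")
```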

  20. An Expectation-Maximization Method for Spatio-Temporal Blind Source Separation Using an AR-MOG Source Model

    PubMed Central

    Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.

    2009-01-01

    In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368

  1. OPTUM : Optimum Portfolio Tool for Utility Maximization documentation and user's guide.

    SciTech Connect

    VanKuiken, J. C.; Jusko, M. J.; Samsa, M. E.; Decision and Information Sciences

    2008-09-30

    The Optimum Portfolio Tool for Utility Maximization (OPTUM) is a versatile and powerful tool for selecting, optimizing, and analyzing portfolios. The software introduces a compact interface that facilitates problem definition, complex constraint specification, and portfolio analysis. The tool allows simple comparisons between user-preferred choices and optimized selections. OPTUM uses a portable, efficient, mixed-integer optimization engine (lp-solve) to derive the optimal mix of projects that satisfies the constraints and maximizes the total portfolio utility. OPTUM provides advanced features, such as convenient menus for specifying conditional constraints and specialized graphical displays of the optimal frontier and alternative solutions to assist in sensitivity visualization. OPTUM can be readily applied to other non-portfolio, resource-constrained optimization problems.
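
    The kind of problem OPTUM hands to lp-solve is a binary program: maximize total portfolio utility subject to a budget cap and conditional (project-dependency) constraints. The sketch below expresses a toy instance using the open-source PuLP package as a stand-in for the lp-solve engine; the project names, utilities, costs, and budget are all invented.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

# Toy portfolio selection as a binary program (PuLP standing in for lp-solve).
utilities = {"A": 12, "B": 9, "C": 7, "D": 5}
costs     = {"A": 8,  "B": 5, "C": 4, "D": 3}
budget = 12

x = {p: LpVariable(f"select_{p}", cat="Binary") for p in utilities}
prob = LpProblem("portfolio", LpMaximize)
prob += lpSum(utilities[p] * x[p] for p in utilities)          # objective
prob += lpSum(costs[p] * x[p] for p in utilities) <= budget    # budget cap
prob += x["C"] <= x["B"]           # conditional constraint: C only with B
prob.solve()
chosen = [p for p in utilities if value(x[p]) == 1]
print("optimal portfolio:", chosen, "utility:", value(prob.objective))
```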

  2. Maternal Immunization Earlier in Pregnancy Maximizes Antibody Transfer and Expected Infant Seropositivity Against Pertussis

    PubMed Central

    Eberhardt, Christiane S.; Blanchard-Rohner, Geraldine; Lemaître, Barbara; Boukrid, Meriem; Combescure, Christophe; Othenin-Girard, Véronique; Chilin, Antonina; Petre, Jean; de Tejada, Begoña Martinez; Siegrist, Claire-Anne

    2016-01-01

    Background. Maternal immunization against pertussis is currently recommended after the 26th gestational week (GW). Data on the optimal timing of maternal immunization are inconsistent. Methods. We conducted a prospective observational noninferiority study comparing the influence of second-trimester (GW 13–25) vs third-trimester (≥GW 26) tetanus-diphtheria-acellular pertussis (Tdap) immunization in pregnant women who delivered at term. Geometric mean concentrations (GMCs) of cord blood antibodies to recombinant pertussis toxin (PT) and filamentous hemagglutinin (FHA) were assessed by enzyme-linked immunosorbent assay. The primary endpoints were GMCs and expected infant seropositivity rates, defined by birth anti-PT >30 enzyme-linked immunosorbent assay units (EU)/mL to confer seropositivity until 3 months of age. Results. We included 335 women (mean age, 31.0 ± 5.1 years; mean gestational age, 39.3 ± 1.3 GW) previously immunized with Tdap in the second (n = 122) or third (n = 213) trimester. Anti-PT and anti-FHA GMCs were higher following second- vs third-trimester immunization (PT: 57.1 EU/mL [95% confidence interval {CI}, 47.8–68.2] vs 31.1 EU/mL [95% CI, 25.7–37.7], P < .001; FHA: 284.4 EU/mL [95% CI, 241.3–335.2] vs 140.2 EU/mL [95% CI, 115.3–170.3], P < .001). The adjusted GMC ratios after second- vs third-trimester immunization differed significantly (PT: 1.9 [95% CI, 1.4–2.5]; FHA: 2.2 [95% CI, 1.7–3.0], P < .001). Expected infant seropositivity rates reached 80% vs 55% following second- vs third-trimester immunization (adjusted odds ratio, 3.7 [95% CI, 2.1–6.5], P < .001). Conclusions. Early second-trimester maternal Tdap immunization significantly increased neonatal antibodies. Recommending immunization from the second trimester onward would widen the immunization opportunity window and could improve seroprotection. PMID:26797213

  3. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  4. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS) based on the expectation maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence for high-frequency object components, reduced sensitivity to constraint parameters, and tolerance of randomly missing data. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical and experimental results from the field test imagery are compared using an effective aperture measure of resolution increase. The effective aperture measure, based on examination of the edge-spread function, is detailed.

  5. Patch-based augmentation of Expectation-Maximization for brain MRI tissue segmentation at arbitrary age after premature birth.

    PubMed

    Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin

    2016-02-15

    Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay, and therefore at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age-specific atlas-based methods and evaluate this using a unique manually traced database of high-resolution images spanning 20 gestational weeks of development. We examine the complementary strengths of age-specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques: patch-based augmentation of Expectation-Maximization with weighted fusion, and a spatial-variability-constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with conventional age-specific atlas-based segmentation and direct patch-based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles, and sulcal cerebrospinal fluid regions, while maintaining comparable results in deep gray matter. PMID:26702777

  6. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    ERIC Educational Resources Information Center

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…

  7. Iterative three-dimensional expectation maximization restoration of single photon emission computed tomography images: Application in striatal imaging

    SciTech Connect

    Gantet, Pierre; Payoux, Pierre; Celler, Anna; Majorel, Cynthia; Gourion, Daniel; Noll, Dominikus; Esquerre, Jean-Paul

    2006-01-15

    Single photon emission computed tomography imaging suffers from poor spatial resolution and high statistical noise. Consequently, the contrast of small structures is reduced, the visual detection of defects is limited, and precise quantification is difficult. To improve the contrast, it is possible to include the spatially variant point spread function of the detection system in the iterative reconstruction algorithm. This kind of method is well known to be effective, but time consuming. We have developed a faster method to account for the spatial resolution loss in three dimensions, based on a post-reconstruction restoration approach. The method uses two steps. First, a noncorrected iterative ordered subsets expectation maximization (OSEM) reconstruction is performed and, in the second step, a three-dimensional (3D) iterative maximum likelihood expectation maximization (ML-EM) a posteriori spatial restoration of the reconstructed volume is done. In this paper, we compare this method (OSEM-R) to the standard OSEM-3D method in three studies (two in simulation and one from experimental data). In the first two studies, contrast, noise, and visual detection of defects are examined. In the third study, a quantitative analysis is performed on data obtained with an anthropomorphic striatal phantom filled with 123-I. From the simulations, we demonstrate that contrast as a function of noise and lesion detectability are very similar for both the OSEM-3D and OSEM-R methods. In the experimental study, we obtained very similar values of activity-quantification ratios for different regions in the brain. The advantage of OSEM-R compared to OSEM-3D is a substantial gain in processing time. This gain depends on several factors. In a typical situation, for a 128 × 128 acquisition of 120 projections, OSEM-R is 13 or 25 times faster than OSEM-3D, depending on the calculation method used in the iterative restoration. In this paper, the OSEM-R method is tested with the approximation of depth independent

  8. Maximizing precipitation utilization in dryland agriculture in South Africa — a review

    NASA Astrophysics Data System (ADS)

    Bennie, A. T. P.; Hensley, M.

    2001-01-01

    Agricultural systems in South Africa have developed under primarily arid and semi-arid climatic conditions where droughts are common. The agricultural practices adopted by farmers must therefore maximize precipitation utilization while ensuring productive, economic, and social sustainability. Precipitation use efficiency (PUE, kg of produce per ha per mm of rainfall plus the change in soil water content of the root zone) proved to be a valuable parameter for comparing the level of precipitation utilization achieved by different production or management practices for dryland crop production or rangeland utilization. Increasing the length of the fallow period before planting increased the amount of pre-plant water stored in the soil, thereby reducing the risk of drought damage to crops, which also resulted in better yields. Deep drainage occurred only on sandy soils during wet seasons, and values as high as 20% of the annual precipitation were measured during years of above-average precipitation. In the experiments reported, soil cultivation generally increased runoff; the retention of large amounts (>6 t ha⁻¹) of crop residue on the soil surface is required to decrease runoff from cultivated fields. Between 50 and 75% of the annual precipitation is lost through evaporation from the soil surface, resulting in relatively low PUE values.
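
    The PUE definition used throughout the review reduces to a one-line computation; a small sketch with invented values:

```python
# Precipitation use efficiency as defined in the review: produce per unit of
# precipitation plus the change in root-zone soil water storage.
def pue(yield_kg_per_ha: float, rain_mm: float, delta_soil_water_mm: float) -> float:
    """PUE in kg of produce per ha per mm of water supplied."""
    return yield_kg_per_ha / (rain_mm + delta_soil_water_mm)

# illustrative values only: 2400 kg/ha grain, 450 mm rain, 30 mm soil water drawn down
print(pue(2400.0, 450.0, 30.0))  # -> 5.0 kg/ha/mm
```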

  9. Hemodynamic Segmentation of Brain Perfusion Images with Delay and Dispersion Effects Using an Expectation-Maximization Algorithm

    PubMed Central

    Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te

    2013-01-01

    Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in the clinical diagnosis and treatment of cerebrovascular diseases. Segmentation methods of this kind are based on clustering bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluating the severity of the vascular disease. In this study, we improved the expectation-maximization segmentation method by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method in tissue segmentation under different levels of delay, dispersion, and noise in the signal profiles. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed, and dispersed hemodynamics can be well differentiated in patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating cerebral blood flow. Furthermore, tissue at risk of infarct and tissue with or without complementary blood supply from the communicating arteries can be identified. PMID:23894386

  10. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution for a heterogeneous patient group through the proportions of each class and the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome when estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML, and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only over the other proposed EM algorithms but also over conventional supervised, unsupervised and semi-supervised learning algorithms.
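
    The common mechanism behind all such variants is easy to sketch: samples with a known class get their E-step responsibilities clamped to that class, while unlabeled samples get the usual soft assignment. The toy below does this for a two-component exponential-mixture survival model with 20% of labels observed; the specific missing-label mechanisms of EM-OCML, EM-PCML, EM-HCML and EM-CPCML are not modeled.

```python
import numpy as np

# EM for an exponential survival mixture with partially observed class labels.
rng = np.random.default_rng(7)
x = np.concatenate([rng.exponential(1.0, 400), rng.exponential(5.0, 400)])
labels = np.full(800, -1)                       # -1 marks an unknown class
known = rng.random(800) < 0.2                   # 20% of class values observed
labels[known] = np.repeat([0, 1], 400)[known]

w, lam = np.array([0.5, 0.5]), np.array([2.0, 0.5])   # weights, rates
for _ in range(100):
    dens = w * lam * np.exp(-lam * x[:, None])        # exponential pdfs
    r = dens / dens.sum(axis=1, keepdims=True)        # soft E-step
    for k in (0, 1):                                  # clamp known labels
        r[labels == k] = np.eye(2)[k]
    w = r.mean(axis=0)                                # M-step
    lam = r.sum(axis=0) / (r * x[:, None]).sum(axis=0)
print("rates:", lam.round(2), "(true 1.0 and 0.2), weights:", w.round(2))
```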

  11. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields

    PubMed Central

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher

    2016-01-01

    Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM progression-of-patterns (GEM-POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250

  12. Hybrid metaheuristic approaches to the expectation maximization for estimation of the hidden Markov model for signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2014-10-01

    The expectation maximization (EM) algorithm is the standard training algorithm for the hidden Markov model (HMM). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem and proposes hybrid metaheuristic approaches to EM for HMMs. In our earlier research, a hybrid of a constraint-based evolutionary learning approach with EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is a mathematical reformulation of HMM estimation that introduces a stochastic step between the EM steps and combines SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid, EA-SASEM, which combines an evolutionary algorithm (EA) with the proposed SASEM. The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and the stochastic reformulation of HMM estimation. The complementary properties of EA and SA, together with the stochastic reformulation in SASEM, give EA-SASEM sufficient potential to find better HMM estimates. To the best of our knowledge, this type of hybridization and mathematical reformulation has not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the TIMIT speech corpus. Experimental results show that the proposed approaches obtain higher recognition accuracies than both the EM algorithm and CEL-EM. PMID:24686310
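
    The essence of interleaving a stochastic, annealed step with EM can be shown in a few lines. In the toy below, a 1-D Gaussian mixture stands in for the HMM: each iteration runs a deterministic EM update on the means, then proposes a random perturbation accepted by the usual Metropolis rule under a decreasing temperature. The proposal scale and cooling schedule are assumptions, and this is the generic SA-EM pattern rather than the paper's exact SASEM formulation.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])

def loglik(mu):
    d = 0.5 * (np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2 * np.pi))
    return np.log(d.sum(axis=1)).sum()

mu, T = np.array([2.9, 3.1]), 5.0            # poor, nearly symmetric start
for _ in range(60):
    # deterministic EM step (means only; unit variances, equal weights)
    r = np.exp(-0.5 * (x[:, None] - mu) ** 2)
    r /= r.sum(axis=1, keepdims=True)
    mu_em = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    # stochastic step: perturb and apply the Metropolis acceptance rule
    cand = mu_em + rng.normal(0, 0.5, size=2)
    if loglik(cand) > loglik(mu_em) or rng.random() < np.exp(
            (loglik(cand) - loglik(mu_em)) / T):
        mu_em = cand
    mu, T = mu_em, T * 0.9                   # anneal the temperature
print("estimated means:", np.sort(mu).round(2))
```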

  13. The role of data assimilation in maximizing the utility of geospace observations (Invited)

    NASA Astrophysics Data System (ADS)

    Matsuo, T.

    2013-12-01

    Data assimilation can facilitate maximizing the utility of existing geospace observations by offering an ultimate marriage of inductive (data-driven) and deductive (first-principles based) approaches to addressing critical questions in space weather. Assimilative approaches that incorporate dynamical models are, in particular, capable of making a diverse set of observations consistent with physical processes included in a first-principles model, and allowing unobserved physical states to be inferred from observations. These points will be demonstrated in the context of the application of an ensemble Kalman filter (EnKF) to a thermosphere and ionosphere general circulation model. An important attribute of this approach is that the feedback between plasma and neutral variables is self-consistently treated both in the forecast model as well as in the assimilation scheme. This takes advantage of the intimate coupling between the thermosphere and ionosphere described in general circulation models to enable the inference of unobserved thermospheric states from the relatively plentiful observations of the ionosphere. Given the ever-growing infrastructure for the global navigation satellite system, this is indeed a promising prospect for geospace data assimilation. In principle, similar approaches can be applied to any geospace observing systems to extract more geophysical information from a given set of observations than would otherwise be possible.
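
    The mechanism by which an EnKF infers unobserved thermospheric states from ionospheric observations is the cross-covariance carried by the forecast ensemble. A minimal stochastic-EnKF analysis step on an invented two-variable state (one observed, one not) makes this concrete:

```python
import numpy as np

# Stochastic EnKF analysis step: observing only component 0 also corrects
# component 1 through the ensemble forecast cross-covariance.
rng = np.random.default_rng(9)
n_ens = 100
truth = np.array([1.0, 3.0])                      # [observed, unobserved]
# forecast ensemble with correlated errors between the two components
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
X = truth[:, None] + L @ rng.normal(size=(2, n_ens))

H = np.array([[1.0, 0.0]])                        # observe first component only
R = np.array([[0.1]])                             # observation error variance
y = np.array([1.4])                                # a single observation

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                         # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                         # forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
# perturbed observations keep the analysis spread statistically consistent
Y = y[:, None] + rng.normal(0, np.sqrt(R[0, 0]), size=(1, n_ens))
Xa = X + K @ (Y - H @ X)                           # analysis ensemble
print("analysis mean:", Xa.mean(axis=1).round(2))
```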

  14. MaxBin: an automated binning method to recover individual genomes from metagenomes using an expectation-maximization algorithm

    PubMed Central

    2014-01-01

    Background Recovering individual genomes from metagenomic datasets allows access to uncultivated microbial populations that may have important roles in natural and engineered ecosystems. Understanding the roles of these uncultivated populations has broad application in ecology, evolution, biotechnology and medicine. Accurate binning of assembled metagenomic sequences is an essential step in recovering the genomes and understanding microbial functions. Results We have developed a binning algorithm, MaxBin, which automates the binning of assembled metagenomic scaffolds using an expectation-maximization algorithm after the assembly of metagenomic sequencing reads. Binning of simulated metagenomic datasets demonstrated that MaxBin had high levels of accuracy in binning microbial genomes. MaxBin was used to recover genomes from metagenomic data obtained through the Human Microbiome Project, which demonstrated its ability to recover genomes from real metagenomic datasets with variable sequencing coverages. Application of MaxBin to metagenomes obtained from microbial consortia adapted to grow on cellulose allowed genomic analysis of new, uncultivated, cellulolytic bacterial populations, including an abundant myxobacterial population distantly related to Sorangium cellulosum that possessed a much smaller genome (5 MB versus 13 to 14 MB) but has a more extensive set of genes for biomass deconstruction. For the cellulolytic consortia, the MaxBin results were compared to binning using emergent self-organizing maps (ESOMs) and differential coverage binning, demonstrating that it performed comparably to these methods but had distinct advantages in automation, resolution of related genomes and sensitivity. Conclusions The automatic binning software that we developed successfully classifies assembled sequences in metagenomic datasets into recovered individual genomes. The isolation of dozens of species in cellulolytic microbial consortia, including a novel species of

  15. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV

  16. Maximizing coupling-efficiency of high-power diode lasers utilizing hybrid assembly technology

    NASA Astrophysics Data System (ADS)

    Zontar, D.; Dogan, M.; Fulghum, S.; Müller, T.; Haag, S.; Brecher, C.

    2015-03-01

    In this paper, we present hybrid assembly technology to maximize coupling efficiency for spatially combined laser systems. High-quality components, such as center-turned focusing units, as well as suitable assembly strategies, are necessary to obtain the highest possible output ratios. Alignment strategies are challenging tasks due to their complexity and sensitivity. Especially in low-volume production, fully automated systems are economically at a disadvantage, as operator experience is often expensive; however, the reproducibility and quality of automatically assembled systems can be superior. Therefore, automated and manual assembly techniques are combined to obtain high coupling efficiency while preserving maximum flexibility. The paper describes the equipment and software needed to enable hybrid assembly processes. Micromanipulator technology with high step-resolution and six degrees of freedom provides a large number of possible evaluation points. Automated algorithms are necessary to speed up data gathering and alignment and to efficiently utilize the available granularity for manual assembly processes. Furthermore, an engineering environment is presented to enable rapid prototyping of automation tasks with simultaneous data evaluation. Integration with simulation environments, e.g. Zemax, allows the verification of assembly strategies in advance. Data-driven decision making ensures consistently high quality, documents the assembly process, and is a basis for further improvement. The hybrid assembly technology has been applied in several applications with efficiencies above 80% and is discussed in this paper. High coupling efficiency has been achieved with minimized assembly effort as a result of semi-automated alignment. This paper focuses on hybrid automation for optimizing and attaching turning mirrors and collimation lenses.

  17. Putting Teens at the Center: Maximizing Public Utility of Urban Space through Youth Involvement in Planning and Employment.

    ERIC Educational Resources Information Center

    Lawson, Laura; McNally, Marcia

    1995-01-01

    Including teens' needs in the planning and maintenance of urban space suggests new methods of layering utility and maximizing benefit to teens and community. Discusses the Berkeley Youth Alternatives (BYA) Youth Employment Landscape Program and BYA Community Garden Patch. Program descriptions and evaluation provide future direction. (LZ)

  18. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    PubMed

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize. PMID:15157975

  19. Social and Professional Participation of Individuals Who Are Deaf: Utilizing the Psychosocial Potential Maximization Framework

    ERIC Educational Resources Information Center

    Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise

    2012-01-01

    This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…

  20. 76 FR 37376 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-27

    ... Management and Budget (67 FR 8452-8460), pursuant to section 515 of the Treasury and General Government... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... releases, archival records, public filings, subpoenas, or adjudicative processes. 3. ``Influential,''...

  1. The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.

    PubMed

    Zhang, Xian; Hirsch, Joy

    2013-01-15

    Real-world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting the expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials that provide rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with the primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852
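
    A small numeric sketch of the exponential discounting form referred to here: V is the discounted value of a reward captured at time T, and its temporal derivative is obtained numerically. The parameters R, k, and T are illustrative, not taken from the study:

        import numpy as np

        # Hypothetical parameters: reward magnitude R, discount rate k, capture time T.
        R, k, T = 1.0, 0.5, 10.0
        t = np.linspace(0.0, T, 501)

        V = R * np.exp(-k * (T - t))        # expected utility of the future reward
        dV_dt = np.gradient(V, t)           # temporal derivative; equals k * V for this form

        # For a single exponential the derivative is proportional to V itself, so the
        # two regressors are decorrelated in practice by trial-by-trial variation in
        # time-to-capture; this sketch just produces the two signals.
        print(V[:3], dV_dt[:3])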

  2. Robust optimal sensor placement for operational modal analysis based on maximum expected utility

    NASA Astrophysics Data System (ADS)

    Li, Binbin; Der Kiureghian, Armen

    2016-06-01

    Optimal sensor placement is essentially a decision problem under uncertainty. The maximum expected utility theory and a Bayesian linear model are used in this paper for robust sensor placement aimed at operational modal identification. To avoid nonlinear relations between modal parameters and measured responses, we choose to optimize the sensor locations relative to identifying modal responses. Since the modal responses contain all the information necessary to identify the modal parameters, the optimal sensor locations for modal response estimation provide at least a suboptimal solution for identification of modal parameters. First, a probabilistic model for sensor placement considering model uncertainty, load uncertainty and measurement error is proposed. The maximum expected utility theory is then applied with this model by considering utility functions based on three principles: quadratic loss, Shannon information, and K-L divergence. In addition, the prior covariance of modal responses under band-limited white-noise excitation is derived and the nearest Kronecker product approximation is employed to accelerate evaluation of the utility function. As demonstration and validation examples, sensor placements in a 16-degree-of-freedom shear-type building and in the Guangzhou TV Tower under ground motion and wind load are considered. Placements of individual displacement meters, velocimeters and accelerometers, as well as placements of mixed sensor types, are illustrated.
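
    As a hedged illustration of the Shannon-information variant of such a utility function, the sketch below greedily places sensors in a Bayesian linear model by maximizing the log-determinant of the posterior precision of the modal responses. The mode-shape matrix is synthetic and the paper's actual formulation is considerably richer:

        import numpy as np

        rng = np.random.default_rng(0)
        n_dof, n_modes, n_sensors = 16, 4, 3
        Phi = rng.standard_normal((n_dof, n_modes))   # synthetic mode shapes
        noise_var, prior_prec = 0.01, np.eye(n_modes)

        def greedy_placement():
            chosen, prec = [], prior_prec.copy()
            for _ in range(n_sensors):
                best_gain, best_loc = -np.inf, None
                for loc in set(range(n_dof)) - set(chosen):
                    phi = Phi[loc:loc + 1]
                    cand = prec + phi.T @ phi / noise_var
                    gain = np.linalg.slogdet(cand)[1]   # Shannon-information utility
                    if gain > best_gain:
                        best_gain, best_loc = gain, loc
                chosen.append(best_loc)
                prec += Phi[best_loc:best_loc + 1].T @ Phi[best_loc:best_loc + 1] / noise_var
            return chosen

        print("sensor locations:", greedy_placement())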

  3. Comparing the performance of FOCE and different expectation-maximization methods in handling complex population physiologically-based pharmacokinetic models.

    PubMed

    Liu, Xiaoxi; Wang, Yuhuan

    2016-08-01

    For the purpose of population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood modeling, such as FO, FOCE, Laplace, ITS and EM. These methods are all designed to estimate the set of parameters that maximize the joint likelihood of observations in a given problem. While FOCE is currently the most widely used method in population modeling, EM methods are getting more popular as the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several versions of EM implementations are available in public modeling software packages. Although there have been several studies and reviews comparing the performance of different methods in handling relatively simple models, there has not been a dedicated study to compare different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP and QRPEM) and FOCE (as a benchmark reference) were evaluated for their estimation accuracy and convergence speed when solving models of increased complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to EM methods for simply structured models. For more complex models and/or sparse data, EM methods are much more robust. While the estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was: QRPEM, IMP and SAEM. IMP gave the most realistic estimation of parameter standard errors, while under- and over-estimation of standard errors were observed in the SAEM and QRPEM methods. PMID:27215925
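
    For readers unfamiliar with the E/M alternation these methods share, here is a minimal EM iteration on a toy two-component Gaussian mixture (not a PBPK model); the responsibilities computed in the E-step play the role of the latent individual-level quantities:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])

        # Initial guesses: component means, shared unit variance, mixing weight.
        mu, w = np.array([-1.0, 1.0]), 0.5
        for _ in range(50):
            # E-step: posterior responsibility of component 0 for each observation.
            p0 = w * np.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = (1 - w) * np.exp(-0.5 * (x - mu[1]) ** 2)
            r = p0 / (p0 + p1)
            # M-step: re-estimate parameters from the expected complete-data likelihood.
            mu = np.array([np.sum(r * x) / np.sum(r), np.sum((1 - r) * x) / np.sum(1 - r)])
            w = r.mean()

        print("means:", mu.round(2), "weight:", round(w, 2))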

  4. Maximizing the diagnostic utility of endoscopic biopsy in dogs and cats with gastrointestinal disease.

    PubMed

    Jergens, Albert E; Willard, Michael D; Allenspach, Karin

    2016-08-01

    Flexible endoscopy has become a valuable tool for the diagnosis of many small animal gastrointestinal (GI) diseases, but the techniques must be performed carefully so that the results are meaningful. This article reviews the current diagnostic utility of flexible endoscopy, including practical/technical considerations for endoscopic biopsy, optimal instrumentation for mucosal specimen collection, the correlation of endoscopic indices to clinical activity and to histopathologic findings, and new developments in the endoscopic diagnosis of GI disease. Recent studies have defined endoscopic biopsy guidelines for the optimal number and quality of diagnostic specimens from different regions of the gut. They also have shown the value of ileal biopsy in the diagnosis of canine and feline chronic enteropathies, and have demonstrated the utility of endoscopic biopsy specimens beyond routine hematoxylin and eosin histopathological analysis, including their use in immunohistochemical, microbiological, and molecular studies. PMID:27387727

  5. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: (1) small set size, (2) minimal network information required for their construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
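
    A minimal sketch of one common greedy construction of a maximal independent set, which is also a dominating set (any node left out must have a neighbor in the set, otherwise the set was not maximal). The high-degree-first ordering is an illustrative heuristic for keeping the set small, not necessarily the selection scheme used in the study:

        def maximal_independent_set(adj):
            """Greedily build a maximal independent set from an adjacency dict."""
            selected, blocked = set(), set()
            # Visiting high-degree nodes first blocks many neighbors early,
            # which tends to keep the resulting set small.
            for node in sorted(adj, key=lambda n: -len(adj[n])):
                if node not in blocked:
                    selected.add(node)
                    blocked.add(node)
                    blocked.update(adj[node])
            return selected

        graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
        print(maximal_independent_set(graph))   # e.g. {2, 4}: independent and dominating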

  6. Expected Utility Based Decision Making under Z-Information and Its Application.

    PubMed

    Aliev, Rashad R; Mraiziq, Derar Atallah Talal; Huseynov, Oleg H

    2015-01-01

    Real-world decision-relevant information is often partially reliable. The reasons are partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-number-based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest in line with the related NL-based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from a human perception point of view as compared to fuzzy numbers. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes the expected utility paradigm and is applied to a benchmark decision problem in the field of economics. PMID:26366163

  7. Maximizing the utility of monitoring to the adaptive management of natural resources

    USGS Publications Warehouse

    Kendall, William L.; Moore, Clinton T.

    2012-01-01

    Data collection is an important step in any investigation about the structure or processes related to a natural system. In a purely scientific investigation (experiments, quasi-experiments, observational studies), data collection is part of the scientific method, preceded by the identification of hypotheses and the design of any manipulations of the system to test those hypotheses. Data collection and the manipulations that precede it are ideally designed to maximize the information that is derived from the study. That is, such investigations should be designed for maximum power to evaluate the relative validity of the hypotheses posed. When data collection is intended to inform the management of ecological systems, we call it monitoring. Note that our definition of monitoring encompasses a broader range of data-collection efforts than some alternative definitions – e.g. Chapter 3. The purpose of monitoring as we use the term can vary, from surveillance or “thumb on the pulse” monitoring (see Nichols and Williams 2006), intended to detect changes in a system due to any non-specified source (e.g. the North American Breeding Bird Survey), to very specific and targeted monitoring of the results of specific management actions (e.g. banding and aerial survey efforts related to North American waterfowl harvest management). Although a role of surveillance monitoring is to detect unanticipated changes in a system, the same result is possible from a collection of targeted monitoring programs distributed across the same spatial range (Box 4.1). In the face of limited budgets and many specific management questions, tying monitoring as closely as possible to management needs is warranted (Nichols and Williams 2006). Adaptive resource management (ARM; Walters 1986, Williams 1997, Kendall 2001, Moore and Conroy 2006, McCarthy and Possingham 2007, Conroy et al. 2008a) provides a context and specific purpose for monitoring: to evaluate decisions with respect to achievement

  8. Utilization of negative beat-frequencies for maximizing the update-rate of OFDR

    NASA Astrophysics Data System (ADS)

    Gabai, Haniel; Botsev, Yakov; Hahami, Meir; Eyal, Avishay

    2015-07-01

    In traditional OFDR systems, the backscattered profile of a sensing fiber is inefficiently duplicated to the negative band of the spectrum. In this work, we present a new OFDR design and algorithm that remove this redundancy and make use of negative beat frequencies. In contrast to conventional OFDR designs, it facilitates efficient use of the available system bandwidth and enables distributed sensing with the maximum allowable interrogation update-rate for a given fiber length. To enable the reconstruction of negative beat frequencies, an I/Q-type receiver is used. In this receiver, both the in-phase (I) and quadrature (Q) components of the backscatter field are detected. Following detection, both components are digitally combined to produce a complex backscatter signal. Accordingly, due to its asymmetric nature, the produced spectrum will not be corrupted by the appearance of negative beat-frequencies. Here, via a comprehensive computer simulation, we show that in contrast to conventional OFDR systems, I/Q OFDR can be operated at maximum interrogation update-rate for a given fiber length. In addition, we experimentally demonstrate, for the first time, the ability of I/Q OFDR to utilize negative beat-frequencies for long-range distributed sensing.
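
    A small simulation of the I/Q idea: combining the two detected components into a complex signal makes the spectrum asymmetric, so a reflector mapped to a negative beat frequency stays separate from one at a positive frequency. Frequencies and the sampling rate are arbitrary illustrative values:

        import numpy as np

        fs, n = 1000.0, 4096
        t = np.arange(n) / fs
        # Two reflectors: beat frequencies chosen on exact FFT bins, one negative.
        f_pos, f_neg = 125.0, -62.5
        i = np.cos(2 * np.pi * f_pos * t) + np.cos(2 * np.pi * abs(f_neg) * t)
        q = np.sin(2 * np.pi * f_pos * t) - np.sin(2 * np.pi * abs(f_neg) * t)

        z = i + 1j * q                        # complex backscatter signal from the I/Q receiver
        spec = np.abs(np.fft.fft(z)) / n
        freqs = np.fft.fftfreq(n, 1 / fs)

        # With real detection alone, -62.5 Hz would fold onto +62.5 Hz; the complex
        # signal keeps the two reflectors on opposite sides of zero frequency.
        for f0 in (f_pos, f_neg):
            print(f0, "Hz peak:", spec[np.argmin(np.abs(freqs - f0))].round(2))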

  9. A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility.

    PubMed

    Guo, Zhishan; Baruah, Sanjoy K

    2016-02-01

    In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure and can also be implemented in hardware circuits, where the convergence time can be kept small enough to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases. PMID:26336153
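
    The epigraph trick behind such formulations can be shown in a few lines: a concave piecewise-linear utility is maximized as a linear program by bounding an auxiliary variable by every linear piece. The toy instance below (two jobs sharing one unit of processor time, with illustrative utility pieces) uses scipy's LP solver rather than the paper's neurodynamic method:

        import numpy as np
        from scipy.optimize import linprog

        # Concave piecewise-linear utility u(x) = min(2x, 0.5x + 0.6, 1.0) per job.
        pieces = [(2.0, 0.0), (0.5, 0.6), (0.0, 1.0)]      # (slope, intercept)

        # Variables: [x1, x2, u1, u2]; maximize u1 + u2 == minimize -(u1 + u2).
        c = np.array([0.0, 0.0, -1.0, -1.0])
        A_ub, b_ub = [], []
        for j in range(2):                                  # u_j <= a*x_j + b for every piece
            for a, b in pieces:
                row = [0.0, 0.0, 0.0, 0.0]
                row[j] = -a
                row[2 + j] = 1.0
                A_ub.append(row)
                b_ub.append(b)
        A_ub.append([1.0, 1.0, 0.0, 0.0])                   # shared processor budget
        b_ub.append(1.0)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None), (None, None), (None, None)])
        print(res.x.round(3), "total utility:", round(-res.fun, 3))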

  10. Effects of lung ventilation–perfusion and muscle metabolism–perfusion heterogeneities on maximal O2 transport and utilization

    PubMed Central

    Cano, I; Roca, J; Wagner, P D

    2015-01-01

    Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle, but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation–perfusion and skeletal muscle metabolism–perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude, and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (VO2) and on mitochondrial O2 pressure (PmitoO2). In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 at sea level by 32 mmHg, but muscle VO2 by only 122 ml min−1 (–3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced arterial PO2 by less than 1 mmHg and VO2 by 32 ml min−1 (–2.4%). Skeletal muscle heterogeneity led to a wide range of potential PmitoO2 among muscle regions, a range that becomes narrower as VO2 increases, and in regions with a low ratio of metabolic capacity to blood flow, PmitoO2 can exceed that of mixed muscle venous blood. For patients with severe COPD, peak VO2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of PmitoO2 both in health and in diseases such as COPD if the extent of both lung ventilation–perfusion and tissue metabolism–perfusion heterogeneity is known. PMID:25640017

  11. Expectation-maximization of the potential of mean force and diffusion coefficient in Langevin dynamics from single molecule FRET data photon by photon.

    PubMed

    Haas, Kevin R; Yang, Haw; Chu, Jhih-Wei

    2013-12-12

    The dynamics of a protein along a well-defined coordinate can be formally projected onto the form of an overdamped Langevin equation. Here, we present a comprehensive statistical-learning framework for simultaneously quantifying the deterministic force (the potential of mean force, PMF) and the stochastic force (characterized by the diffusion coefficient, D) from single-molecule Förster-type resonance energy transfer (smFRET) experiments. The likelihood functional of the Langevin parameters, PMF and D, is expressed by a path integral of the latent smFRET distance that follows Langevin dynamics and realized by the donor and the acceptor photon emissions. The solution is made possible by an eigen decomposition of the time-symmetrized form of the corresponding Fokker-Planck equation coupled with photon statistics. To extract the Langevin parameters from photon arrival time data, we advance the expectation-maximization algorithm in statistical learning, originally developed for and mostly used in discrete-state systems, to a general form in the continuous space that allows for a variational calculus on the continuous PMF function. We also introduce the regularization of the solution space in this Bayesian inference based on a maximum trajectory-entropy principle. We use a highly nontrivial example with realistically simulated smFRET data to illustrate the application of this new method. PMID:23937300

  12. Non-linear spatio-temporal filtering of dynamic PET data using a 4-dimensional Gaussian filter and expectation-maximization deconvolution

    PubMed Central

    Holden, J E

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines 4-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established 3- and 4-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. PMID:23370699
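
    A hedged one-dimensional sketch of the same general recipe: Gaussian smoothing followed by an EM (Richardson-Lucy) deconvolution using the same Gaussian as the point-spread function. Parameter values and the toy signal are illustrative, not the authors' implementation:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gauss(x, sigma):
            return gaussian_filter(x, sigma, mode="nearest")

        def stem_filter(noisy, sigma=2.0, n_iter=20, eps=1e-12):
            """Gaussian smoothing followed by EM (Richardson-Lucy) deconvolution."""
            smoothed = gauss(noisy, sigma)
            estimate = np.maximum(smoothed, eps)
            for _ in range(n_iter):
                blurred = gauss(estimate, sigma)
                # Multiplicative EM update restores frequencies the filter suppressed.
                estimate *= gauss(smoothed / np.maximum(blurred, eps), sigma)
            return estimate

        # Toy 1-D "time-activity curve": a step with Poisson noise.
        rng = np.random.default_rng(2)
        truth = np.concatenate([np.full(50, 5.0), np.full(50, 20.0)])
        noisy = rng.poisson(truth).astype(float)
        print(np.abs(stem_filter(noisy) - truth).mean() < np.abs(noisy - truth).mean())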

  13. In vitro estimation of fast and slow wave parameters of thin trabecular bone using space-alternating generalized expectation-maximization algorithm.

    PubMed

    Grimes, Morad; Bouhadjera, Abdelmalek; Haddad, Sofiane; Benkedidah, Toufik

    2012-07-01

    In testing cancellous bone using ultrasound, two types of longitudinal Biot's waves are observed in the received signal. These are known as fast and slow waves and their appearance depends on the alignment of bone trabeculae in the propagation path and the thickness of the specimen under test (SUT). They can be used as an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. However, the identification of these waves in the received signal can be difficult to achieve. In this study, ultrasonic wave propagation in a 4 mm-thick bovine cancellous bone in the direction parallel to the trabecular alignment is considered. The observed Biot's fast and slow longitudinal waves are superimposed, which makes it difficult to extract any information from the received signal. These two waves can be separated using the space alternating generalized expectation maximization (SAGE) algorithm. The latter has been used mainly in speech processing. In this new approach, parameters such as arrival time, center frequency, bandwidth, amplitude, phase and velocity of each wave are estimated. The B-scan images and their associated A-scans obtained through simulations using Biot's finite-difference time-domain (FDTD) method are validated experimentally using a thin bone sample obtained from the femoral head of a 30-month-old bovine. PMID:22284937

  14. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM). PMID:19068441

  15. Population genetic analysis of bi-allelic structural variants from low-coverage sequence data with an expectation-maximization algorithm

    PubMed Central

    2014-01-01

    Background Population genetics and association studies usually rely on a set of known variable sites that are then genotyped in subsequent samples, because it is easier to genotype than to discover the variation. This is also true for structural variation detected from sequence data. However, the genotypes at known variable sites can only be inferred with uncertainty from low coverage data. Thus, statistical approaches that infer genotype likelihoods, test hypotheses, and estimate population parameters without requiring accurate genotypes are becoming popular. Unfortunately, the current implementations of these methods are intended to analyse only single nucleotide and short indel variation, and they usually assume that the two alleles in a heterozygous individual are sampled with equal probability. This is generally false for structural variants detected with paired ends or split reads. Therefore, the population genetics of structural variants cannot be studied, unless a painstaking and potentially biased genotyping is performed first. Results We present svgem, an expectation-maximization implementation to estimate allele and genotype frequencies, calculate genotype posterior probabilities, and test for Hardy-Weinberg equilibrium and for population differences, from the numbers of times the alleles are observed in each individual. Although applicable to single nucleotide variation, it aims at bi-allelic structural variation of any type, observed by either split reads or paired ends, with arbitrarily high allele sampling bias. We test svgem with simulated and real data from the 1000 Genomes Project. Conclusions svgem makes it possible to use low-coverage sequencing data to study the population distribution of structural variants without having to know their genotypes. Furthermore, this advance allows the combined analysis of structural and nucleotide variation within the same genotype-free statistical framework, thus preventing biases introduced by genotyping.
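
    In the same spirit (though not the authors' implementation), the sketch below runs EM for a bi-allelic variant with allele-sampling bias: genotype posteriors are computed from read counts in the E-step, and the population allele frequency is re-estimated under Hardy-Weinberg equilibrium in the M-step. All parameters and data are simulated:

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(3)
        true_f, bias, err, n_ind = 0.3, 0.35, 0.01, 500   # bias: alt-read fraction in hets

        g = rng.binomial(2, true_f, n_ind)                 # latent genotypes under HWE
        depth = rng.poisson(4, n_ind) + 1                  # low-coverage read depths
        k = rng.binomial(depth, np.array([err, bias, 1 - err])[g])   # alt-supporting reads

        # Genotype likelihoods for each individual (columns: hom-ref, het, hom-alt).
        lik = np.stack([binom.pmf(k, depth, p) for p in (err, bias, 1 - err)], axis=1)

        f = 0.5
        for _ in range(100):
            prior = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])   # HWE prior
            post = lik * prior
            post /= post.sum(axis=1, keepdims=True)        # E-step: genotype posteriors
            f = (post @ np.array([0, 1, 2])).mean() / 2    # M-step: update allele frequency

        print("estimated allele frequency:", round(float(f), 3))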

  16. Illustrating Caffeine's Pharmacological and Expectancy Effects Utilizing a Balanced Placebo Design.

    ERIC Educational Resources Information Center

    Lotshaw, Sandra C.; And Others

    1996-01-01

    Hypothesizes that pharmacological and expectancy effects may be two principles that govern caffeine consumption in the same way they affect other drug use. Tests this theory through a balanced placebo design on 100 male undergraduate students. Expectancy set and caffeine content appeared equally powerful, and worked additively, to affect…

  17. Comparison between the Health Belief Model and Subjective Expected Utility Theory: predicting incontinence prevention behaviour in post-partum women.

    PubMed

    Dolman, M; Chase, J

    1996-08-01

    A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravida females. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly. PMID:9238593

  18. Expected Utility Theory as a Guide to Contingency (Allowance or Management Reserve) Allocation

    SciTech Connect

    Thibadeau, Barbara M

    2006-01-01

    In this paper, I view a project from the perspective of utility theory. I suggest that, by determining an optimal percent contingency (relative to remaining work) and identifying and enforcing a required change in behavior, from one that is risk-seeking to one that is risk-averse, a project's contingency can be managed more effectively. I argue that early on in a project, risk-seeking behavior dominates. During this period, requests for contingency are less rigorously scrutinized. As the design evolves, more accurate information becomes available. Once the designs have been finalized, the project team must transition from a free-thinking, exploratory mode to an execution mode. If projects do not transition fast enough from a risk-seeking to a risk-averse organization, an inappropriate allocation of project contingency could occur (too much too early in the project). I show that the behavioral patterns used to characterize utility theory are those that exist in the project environment. I define a project's utility and thus, provide project managers with a metric against which all gambles (requests for contingency) can be evaluated. I discuss other research as it relates to utility and project management. From empirical data analysis, I demonstrate that there is a direct correlation between progress on a project's design activities and the rate at which project contingency is allocated and recommend a transition time frame during which the rate of allocation should decrease and the project should transition from risk-seeking to risk-averse. I show that these data are already available from a project's earned value management system and thus, inclusion of this information in the standard monthly reporting suite can enhance a project manager's decision making capability.

  19. Maximizing the utilization and impact of medical educational software by designing for local area network (LAN) implementation.

    PubMed Central

    Stevens, R.; Reber, E.

    1993-01-01

    The design, development and implementation of medical education software often occurs without sufficient consideration of the potential benefits that can be realized by making the software network aware. These benefits can be considerable and can greatly enhance the utilization and potential impact of the software. This article details how multiple aspects of the IMMEX problem solving project have benefited from taking maximum advantage of LAN resources. PMID:8130583

  20. Utilizing WASP and hot waterflood to maximize the value of a thermally mature steam drive in the West Coalinga field

    SciTech Connect

    DeFrancisco, S.T.; Sanford, S.J.; Hong, K.C.

    1995-12-31

    The Water-Alternating-Steam-Process (WASP) has been utilized on Section 13D, West Coalinga Field since 1988. Originally implemented to control premature, high-temperature steam breakthrough, the process has improved sales oil recovery in both breakthrough and non-breakthrough patterns. A desktop, semi-conceptual simulation study was initiated in June 1993 to provide a theoretical basis for optimizing and monitoring the WASP project. The simulation study results showed that the existing WASP injection strategy could be further optimized. It also showed that conversion to continuous hot waterflood was the optimum injection strategy for the steamflood sands. The Section 13D WASP project was gradually converted to hot waterflood during 1994. Conversion to hot waterflood has significantly improved project cash flow and increased the value of the Section 13D thermal project.

  1. Expectant Mothers Maximizing Opportunities: Maternal Characteristics Moderate Multifactorial Prenatal Stress in the Prediction of Birth Weight in a Sample of Children Adopted at Birth

    PubMed Central

    Brotnow, Line; Reiss, David; Stover, Carla S.; Ganiban, Jody; Leve, Leslie D.; Neiderhiser, Jenae M.; Shaw, Daniel S.; Stevens, Hanna E.

    2015-01-01

    Background Mothers’ stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of (1) multifactorial prenatal stress, integrating objective “stressors” and subjective “distress”, and (2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Method Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Results Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (ΔR2 = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (ΔR2 = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher socioeconomic status (SES) was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. Conclusions The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight. PMID:26544958

  2. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  3. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
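
    The expectation-maximization reconstruction referred to in these two records is commonly written as the MLEM update x <- x / (A^T 1) * A^T(y / (Ax)). A minimal matrix-based sketch on a synthetic system (not a PBR geometry) follows:

        import numpy as np

        rng = np.random.default_rng(4)
        n_pix, n_det = 16, 32
        A = rng.random((n_det, n_pix))                # system (projection) matrix
        x_true = rng.random(n_pix) * 10
        y = rng.poisson(A @ x_true)                   # noisy projection data

        # MLEM / expectation-maximization iteration.
        x = np.ones(n_pix)
        sens = A.T @ np.ones(n_det)                   # per-pixel sensitivity, A^T 1
        for _ in range(200):
            proj = np.maximum(A @ x, 1e-12)           # forward projection, Ax
            x *= (A.T @ (y / proj)) / sens            # multiplicative EM update

        print("relative error:", round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))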

  4. Expectation versus Reality: The Impact of Utility on Emotional Outcomes after Returning Individualized Genetic Research Results in Pediatric Rare Disease Research, a Qualitative Interview Study

    PubMed Central

    Cacioppo, Cara N.; Chandler, Ariel E.; Towne, Meghan C.; Beggs, Alan H.; Holm, Ingrid A.

    2016-01-01

    Purpose Much information on parental perspectives on the return of individual research results (IRR) in pediatric genomic research is based on hypothetical rather than actual IRR. Our aim was to understand how the expected utility to parents who received IRR on their child from a genetic research study compared to the actual utility of the IRR received. Methods We conducted individual telephone interviews with parents who received IRR on their child through participation in the Manton Center for Orphan Disease Research Gene Discovery Core (GDC) at Boston Children’s Hospital (BCH). Results Five themes emerged around the utility that parents expected and actually received from IRR: predictability, management, family planning, finding answers, and helping science and/or families. Parents expressing negative or mixed emotions after IRR return were those who did not receive the utility they expected from the IRR. Conversely, parents who expressed positive emotions were those who received as much or greater utility than expected. Conclusions Discrepancies between expected and actual utility of IRR affect the experiences of parents and families enrolled in genetic research studies. An informed consent process that fosters realistic expectations between researchers and participants may help to minimize any negative impact on parents and families. PMID:27082877

  5. Expected utility of voluntary vaccination in the middle of an emergent Bluetongue virus serotype 8 epidemic: a decision analysis parameterized for Dutch circumstances.

    PubMed

    Sok, J; Hogeveen, H; Elbers, A R W; Velthuis, A G J; Oude Lansink, A G J M

    2014-08-01

    In order to put a halt to the Bluetongue virus serotype 8 (BTV-8) epidemic in 2008, the European Commission promoted vaccination at a transnational level as a new measure to combat BTV-8. Most European member states opted for a mandatory vaccination campaign, whereas the Netherlands, amongst others, opted for a voluntary campaign. For the latter to be effective, the farmer's willingness to vaccinate should be high enough to reach satisfactory vaccination coverage to stop the spread of the disease. This study looked at a farmer's expected utility of vaccination, which is expected to have a positive impact on the willingness to vaccinate. Decision analysis was used to structure the vaccination decision problem into decisions, events and payoffs, and to define the relationships among these elements. Two scenarios were formulated to distinguish farmers' mindsets, based on differences in dairy heifer management. For each of the scenarios, a decision tree was run for two years to study vaccination behaviour over time. The analysis was based on the expected utility criterion, which makes it possible to account for the effect of a farmer's risk preference on the vaccination decision. Probabilities were estimated by experts; payoffs were based on an earlier published study. According to the results of the simulation, the farmer decided initially to vaccinate against BTV-8 as the net expected utility of vaccination was positive. Re-vaccination was uncertain due to the lower expected costs of a continued outbreak. A risk-averse farmer in this respect is more likely to re-vaccinate. When heifers were retained for export on the farm, the net expected utility of vaccination was found to be generally larger and re-vaccination was thus more likely to happen. For future animal health programmes that rely on a voluntary approach, results show that the provision of financial incentives can be adjusted to the farmers' willingness to vaccinate over time. Important in this respect are the decision
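
    A stripped-down version of such a decision analysis, with illustrative payoffs and probabilities (not the paper's parameterization) and an exponential utility function to express risk aversion:

        import math

        def exp_utility(payoff, risk_aversion):
            """Concave (risk-averse) utility; risk_aversion = 0 reduces to risk neutrality."""
            if risk_aversion == 0:
                return payoff
            return (1 - math.exp(-risk_aversion * payoff)) / risk_aversion

        # Illustrative payoffs in euros (negative = cost) and outbreak probability.
        p_outbreak = 0.4
        cost_vaccination = -2000.0
        loss_outbreak_unvaccinated = -30000.0

        def expected_utility(vaccinate, ra):
            if vaccinate:
                return exp_utility(cost_vaccination, ra)      # outbreak loss avoided
            return (p_outbreak * exp_utility(loss_outbreak_unvaccinated, ra)
                    + (1 - p_outbreak) * exp_utility(0.0, ra))

        for ra in (0.0, 1e-4):  # risk-neutral vs risk-averse farmer
            net = expected_utility(True, ra) - expected_utility(False, ra)
            print("risk aversion", ra, "-> net EU of vaccinating:", round(net, 1))

    Consistent with the abstract's observation, the net expected utility of vaccinating grows with risk aversion in this toy setup, so a risk-averse decision maker is more likely to (re-)vaccinate.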

  6. Managing Expectations: Results from Case Studies of US Water Utilities on Preparing for, Coping with, and Adapting to Extreme Events

    NASA Astrophysics Data System (ADS)

    Beller-Simms, N.; Metchis, K.

    2014-12-01

    Water utilities, reeling from increased impacts of successive extreme events such as floods, droughts, and derechos, are taking a more proactive role in preparing for future incursions. A recent study by Federal and water foundation investigators reveals how six US water utilities and their regions prepared for, responded to, and coped with recent extreme weather and climate events and the lessons they are using to plan future adaptation and resilience activities. Two case studies will be highlighted. (1) Sonoma County, CA, has had alternating floods and severe droughts. In 2009, this area, home to competing water users (agricultural crops, wineries, tourism, and fisheries), faced a three-year drought, accompanied at the end by intense frosts. Competing uses of water threatened the grape harvest, endangered the fish industry and resulted in a series of regulations and court cases. Five years later, new efforts by partners in the entire watershed have identified mutual opportunities for increased basin sustainability in the face of a changing climate. (2) Washington DC had a derecho in late June 2012, which curtailed water, communications, and power delivery during a record heat spell that impacted hundreds of thousands of residents and lasted over the height of the tourist-intensive July 4th holiday. Lessons from this event were applied three months later in anticipation of the approaching Superstorm Sandy. This study will help other communities in improving their resiliency in the face of future climate extremes. For example, this study revealed that (1) communities are planning for multiple types and occurrences of extreme events, which are becoming more severe and frequent and are impacting communities that are expanding into more vulnerable areas, and (2) decisions by one sector cannot be made in a vacuum and require the scientific, sectoral and citizen communities to work towards sustainable solutions.

  7. Prognostic utility of predischarge dipyridamole-thallium imaging compared to predischarge submaximal exercise electrocardiography and maximal exercise thallium imaging after uncomplicated acute myocardial infarction

    SciTech Connect

    Gimple, L.W.; Hutter, A.M. Jr.; Guiney, T.E.; Boucher, C.A. )

    1989-12-01

    The prognostic value of predischarge dipyridamole-thallium scanning after uncomplicated myocardial infarction was determined by comparison with submaximal exercise electrocardiography and 6-week maximal exercise thallium imaging and by correlation with clinical events. Two endpoints were defined: cardiac events and severe ischemic potential. Of the 40 patients studied, 8 had cardiac events within 6 months (1 died, 3 had myocardial infarction and 4 had unstable angina requiring hospitalization). The finding of any redistribution on dipyridamole-thallium scanning was common (77%) in these patients and had poor specificity (29%). Redistribution outside of the infarct zone, however, had equivalent sensitivity (63%) and better specificity (75%) for events (p < 0.05). Both predischarge dipyridamole-thallium and submaximal exercise electrocardiography identified 5 of the 8 events (p = 0.04 and 0.07, respectively). The negative predictive accuracy for events for both dipyridamole-thallium and submaximal exercise electrocardiography was 88%. In addition to the 8 patients with events, 16 other patients had severe ischemic potential (6 had coronary bypass surgery, 1 had inoperable 3-vessel disease and 9 had markedly abnormal 6-week maximal exercise tests). Predischarge dipyridamole-thallium and submaximal exercise testing also identified 8 and 7 of these 16 patients with severe ischemic potential, respectively. Six of the 8 cardiac events occurred before 6-week follow-up. A maximal exercise thallium test at 6 weeks correctly identified 1 of the 2 additional events within 6 months. Thallium redistribution after dipyridamole in coronary territories outside the infarct zone is a sensitive and specific predictor of subsequent cardiac events and identifies patients with severe ischemic potential.

  8. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
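
    A toy calculation of the prediction described here: scoring options by expected utility plus outcome entropy can reverse a preference between a deterministic option and a slightly lower-valued option that keeps two outcomes open. The numbers and the additive form are illustrative, not the paper's model:

        import numpy as np

        def entropy(p):
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        # Two choices with outcome probabilities and utilities per outcome.
        choices = {
            "safe":  {"probs": [1.0, 0.0], "utils": [1.0, 0.0]},
            "mixed": {"probs": [0.5, 0.5], "utils": [0.9, 0.9]},
        }

        for name, c in choices.items():
            eu = float(np.dot(c["probs"], c["utils"]))
            # An entropy-seeking agent effectively scores EU plus outcome entropy.
            print(name, "EU:", round(eu, 2), "EU + H:", round(eu + entropy(c["probs"]), 2))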

  9. On deciding to have a lobotomy: either lobotomies were justified or decisions under risk should not always seek to maximise expected utility.

    PubMed

    Cooper, Rachel

    2014-02-01

    In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk are made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced, or we can conclude that the use of formal decision procedures, such as MEU, is problematic. PMID:24449251

  10. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making.

  11. DEVELOPMENT OF A VALIDATED MODEL FOR USE IN MINIMIZING NOx EMISSIONS AND MAXIMIZING CARBON UTILIZATION WHEN CO-FIRING BIOMASS WITH COAL

    SciTech Connect

    Larry G. Felix; P. Vann Bush; Stephen Niksa

    2003-04-30

    In full-scale boilers, the effect of biomass cofiring on NO{sub x} and unburned carbon (UBC) emissions has been found to be site-specific. Few sets of field data are comparable and no consistent database of information exists upon which cofiring fuel choice or injection system design can be based to assure that NO{sub x} emissions will be minimized and UBC reduced. This report presents the results of a comprehensive project that generated an extensive set of pilot-scale test data that were used to validate a new predictive model for the cofiring of biomass and coal. All testing was performed at the 3.6 MMBtu/hr (1.75 MW{sub t}) Southern Company Services/Southern Research Institute Combustion Research Facility where a variety of burner configurations, coals, biomasses, and biomass injection schemes were utilized to generate a database of consistent, scalable, experimental results (422 separate test conditions). This database was then used to validate a new model for predicting NO{sub x} and UBC emissions from the cofiring of biomass and coal. This model is based on an Advanced Post-Processing (APP) technique that generates an equivalent network of idealized reactor elements from a conventional CFD simulation. The APP reactor network is a computational environment that allows for the incorporation of all relevant chemical reaction mechanisms and provides a new tool to quantify NO{sub x} and UBC emissions for any cofired combination of coal and biomass.

  12. Maximizing the utilization of Laminaria japonica as biomass via improvement of alginate lyase activity in a two-phase fermentation system.

    PubMed

    Oh, Yuri; Xu, Xu; Kim, Ji Young; Park, Jong Moon

    2015-08-01

    Brown seaweed contains up to 67% of carbohydrates by dry weight and presents high potential as a polysaccharide feedstock for biofuel production. To effectively use brown seaweed as a biomass, degradation of alginate is the major challenge due to its complicated structure and low solubility in water. This study focuses on the isolation of alginate-degrading bacteria, determining the optimum fermentation conditions, as well as comparing the conventional single fermentation system with a two-phase fermentation system, which separately uses alginate and mannitol extracted from Laminaria japonica. The maximum yields of organic acid production and volatile solids (VS) reduction were 0.516 g/g and 79.7%, respectively, using the two-phase fermentation system, in which alginate fermentation was carried out at pH 7 and mannitol fermentation at pH 8. The two-phase fermentation system increased the yield of organic acid production 1.14-fold and led to a 1.45-fold greater reduction of VS when compared to the conventional single fermentation system at pH 8. The results show that the two-phase fermentation system improved the utilization of alginate by separating alginate from mannitol, leading to enhanced alginate lyase activity. PMID:26098412

  13. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  14. A Column Generation Approach to Solve Multi-Team Influence Maximization Problem for Social Lottery Design

    NASA Astrophysics Data System (ADS)

    Jois, Manjunath Holaykoppa Nanjunda

    The conventional Influence Maximization problem is the problem of finding a team (a small subset) of seed nodes in a social network that maximizes the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all the possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column-generation-based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only those teams that can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and perform a computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve otherwise hard-to-solve influence maximization problems.
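
    As a point of reference for the conventional problem described in the opening sentence, here is a minimal greedy influence-maximization sketch under the independent cascade model, with Monte Carlo estimates of spread (the paper's multi-team, column-generation formulation is considerably richer):

        import random

        def simulate_spread(adj, seeds, p=0.1):
            """One independent-cascade simulation; returns the number of activated nodes."""
            active, frontier = set(seeds), list(seeds)
            while frontier:
                node = frontier.pop()
                for nbr in adj.get(node, ()):
                    if nbr not in active and random.random() < p:
                        active.add(nbr)
                        frontier.append(nbr)
            return len(active)

        def greedy_seeds(adj, k, n_sim=200):
            """Greedily add the seed with the best estimated marginal spread."""
            seeds = []
            for _ in range(k):
                def avg(s):
                    return sum(simulate_spread(adj, seeds + [s]) for _ in range(n_sim)) / n_sim
                best = max((n for n in adj if n not in seeds), key=avg)
                seeds.append(best)
            return seeds

        random.seed(5)
        adj = {i: [(i + 1) % 30, (i + 7) % 30] for i in range(30)}   # toy network
        print("seed team:", greedy_seeds(adj, 3))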

  15. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  16. Great Expectations.

    ERIC Educational Resources Information Center

    Sullivan, Patricia

    1999-01-01

    Parents must learn to transmit a sense of high expectations to their children (related to behavior and accomplishments) without crushing them with too much pressure. This means setting realistic expectations based on their children's special abilities, listening to their children's feelings about the expectations, and understanding what…

  17. Maximally nonlocal theories cannot be maximally random.

    PubMed

    de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio

    2015-04-24

    Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039

  18. Maximal combustion temperature estimation

    NASA Astrophysics Data System (ADS)

    Golodova, E.; Shchepakina, E.

    2006-12-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.

  19. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  1. Exceeding Expectations

    ERIC Educational Resources Information Center

    Cannon, John

    2011-01-01

    Awareness of expectations is so important in the facilities business. The author's experience has taught him that it is essential to understand how expectations impact people's lives, as well as the lives of those for whom services are provided every day. This article presents examples and ideas that will provide insight to help educators…

  2. Inclusive fitness maximization: An axiomatic approach.

    PubMed

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. PMID:24530825

  3. Maximizing Classroom Participation.

    ERIC Educational Resources Information Center

    Englander, Karen

    2001-01-01

    Discusses how to maximize classroom participation in the English-as-a-Second-or-Foreign-Language classroom, and provides a classroom discussion method that is based on real-life problem solving. (Author/VWL)

  4. Generation and Transmission Maximization Model

    Energy Science and Technology Software Center (ESTSC)

    2001-04-05

    GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system, taking into account not only a single system's limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at a minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detailed operational limitations such as power plant ramp rates and hydropower reservoir constraints.
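
    As a rough illustration of the kind of net-revenue maximization GTMax performs, the sketch below solves a one-plant, 24-hour toy problem: maximize spot-market profit subject to a capacity limit, a firm must-serve floor, and a total energy budget. All numbers and names are invented; the real model adds ramp rates, contract terms, and multi-system transactions.

      import numpy as np
      from scipy.optimize import linprog

      hours = 24
      spot = 20.0 + 15.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, hours))  # $/MWh spot prices
      mc = 18.0        # marginal cost, $/MWh
      cap = 100.0      # plant capacity, MW
      firm = 30.0      # must-serve firm delivery each hour, MW
      budget = 1600.0  # total energy available over the day, MWh (reservoir-like limit)

      # decision x[h] = generation in hour h, all sold at spot in this toy;
      # maximize sum (spot[h] - mc) * x[h] s.t. firm <= x[h] <= cap, sum x <= budget
      res = linprog(-(spot - mc), A_ub=[np.ones(hours)], b_ub=[budget],
                    bounds=[(firm, cap)] * hours, method="highs")
      print("optimal profit: $%.0f" % -res.fun)
      print("hours dispatched at full capacity:", int((res.x > cap - 1e-6).sum()))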

  5. How To: Maximize Google

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.

  6. Maximal Outboxes of Quadrilaterals

    ERIC Educational Resources Information Center

    Zhao, Dongsheng

    2011-01-01

    An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…

  7. Infrared Maximally Abelian Gauge

    SciTech Connect

    Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio

    2007-02-27

    The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.

  8. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will have higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
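
    The report's quantum-classical dynamics cannot be reconstructed from the abstract alone; the toy below only illustrates the stated idea of treating a positive objective as a probability density that attracts samples, using plain classical rejection sampling. The objective f, the search bounds, and the envelope constant are assumptions.

      import numpy as np

      def f(x):
          # positive objective with two bumps; global maximum near x = 2
          return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 1.0) ** 2)

      rng = np.random.default_rng(1)
      lo, hi, f_max = -5.0, 5.0, 1.1      # search box and a crude envelope f <= f_max
      xs = rng.uniform(lo, hi, 200_000)
      accepted = xs[rng.uniform(0.0, f_max, xs.size) < f(xs)]  # samples ~ density prop. to f
      print("argmax estimate:", accepted[np.argmax(f(accepted))])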

  9. MAXIM: The Blackhole Imager

    NASA Technical Reports Server (NTRS)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.

  10. Varieties of maximal line subbundles

    NASA Astrophysics Data System (ADS)

    Oxbury, W. M.

    2000-07-01

    The point of this note is to make an observation concerning the variety M(E) parametrizing line subbundles of maximal degree in a generic stable vector bundle E over an algebraic curve C. M(E) is smooth and projective and its dimension is known in terms of the rank and degree of E and the genus of C (see Section 1). Our observation (Theorem 3·1) is that it has exactly the Chern numbers of an étale cover of the symmetric product S^δC, where δ = dim M(E). This suggests looking for a natural map M(E) → S^δC; however, it is not clear what such a map should be. Indeed, we exhibit an example in which M(E) is connected and deforms non-trivially with E, while there are only finitely many isomorphism classes of étale cover of the symmetric product. This shows that for a general deformation in the family M(E) cannot be such a cover (see Section 4). One may conjecture that M(E) is always connected. This would follow from ampleness of a certain Picard-type bundle on the Jacobian and there seems to be some evidence for expecting this, though we do not pursue this question here. Note that by forgetting the inclusion of a maximal line subbundle in E we get a natural map from M(E) to the Jacobian whose image W(E) is analogous to the classical (Brill-Noether) varieties of special line bundles. (In this sense M(E) is precisely a generalization of the symmetric products of C.) In Section 2 we give some results on W(E) which generalise standard Brill-Noether properties. These are due largely to Laumon, to whom the author is grateful for the reference [9].

  11. Creating a Bridge between Data Collection and Program Planning: A Technical Assistance Model to Maximize the Use of HIV/AIDS Surveillance and Service Utilization Data for Planning Purposes

    ERIC Educational Resources Information Center

    Logan, Jennifer A.; Beatty, Maile; Woliver, Renee; Rubinstein, Eric P.; Averbach, Abigail R.

    2005-01-01

    Over time, improvements in HIV/AIDS surveillance and service utilization data have increased their usefulness for planning programs, targeting resources, and otherwise informing HIV/AIDS policy. However, community planning groups, service providers, and health department staff often have difficulty in interpreting and applying the wide array of…

  12. Maximizing Brightness in Photoinjectors

    SciTech Connect

    Limborg-Deprey, C.; Tomizawa, H.; /JAERI-RIKEN, Hyogo

    2011-11-30

    If the laser pulse driving photoinjectors could be arbitrarily shaped, the emittance growth induced by space charge effects could be totally compensated for. In particular, for RF guns, the photo-electron distribution leaving the cathode should be close to a uniform distribution contained in a 3D-ellipsoid contour, and the emittance at the end of the injector could then be as small as the cathode emittance. We explore how the emittance and the brightness can be optimized for a photoinjector based on an RF gun, depending on the peak current requirements; techniques available to produce these ideal laser pulse shapes are also discussed. For photo-cathodes which have very fast emission times, and assuming a perfectly uniform emitting surface, such a distribution could be achieved by shaping the laser into a pulse of constant fluence limited in space by a 3D-ellipsoid contour. Simulations show that in such conditions, with the standard linear emittance compensation, the emittance at the end of the photo-injector beamline approaches the minimum value imposed by the cathode emittance. Brightness, which is expressed as the ratio of peak current over the product of the two transverse emittances, seems to be maximized for small charges. Numerical simulations also show that for very high charge per bunch (10 nC), emittances as small as 2 mm-mrad could be reached by using 3D-ellipsoidal laser pulses in an S-band gun. The production of 3D-ellipsoidal pulses is very challenging, but seems worth the effort. We briefly discuss some of the present ideas and difficulties of achieving such pulses.

  13. Smoking Outcome Expectancies among College Students.

    ERIC Educational Resources Information Center

    Brandon, Thomas H.; Baker, Timothy B.

    Alcohol expectancies have been found to predict later onset of drinking among adolescents. This study examined whether the relationship between level of alcohol use and expectancies is paralleled with cigarette smoking, and attempted to identify the content of smoking expectancies. An instrument to measure the subjective expected utility of…

  14. Maximize x(a - x)

    ERIC Educational Resources Information Center

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
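
    The article's five proofs are not reproduced in this record; one standard calculus-free argument, in the spirit of the methods it surveys, is completing the square:

      \[
        x(a-x) \;=\; \Bigl(\frac{a}{2}\Bigr)^{2} - \Bigl(x-\frac{a}{2}\Bigr)^{2}
               \;\le\; \Bigl(\frac{a}{2}\Bigr)^{2},
      \]

    with equality exactly when x = a/2.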

  15. On the maximal diphoton width

    NASA Astrophysics Data System (ADS)

    Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo

    2016-03-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  16. All maximally entangling unitary operators

    SciTech Connect

    Cohen, Scott M.

    2011-11-15

    We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.

  17. Multidimensional Scaling for Measuring Alcohol Expectancies.

    ERIC Educational Resources Information Center

    Rather, Bruce; And Others

    Although expectancies for alcohol have been shown to influence drinking behavior, current expectancy questionnaires do not lend themselves to the study of how expectancies are represented in memory. Two studies were conducted which utilized multidimensional scaling techniques designed to produce hypothesized representations of cognitive…

  18. BIOMASS UTILIZATION

    EPA Science Inventory

    The biomass utilization task consists of the evaluation of a biomass conversion technology, including research and development initiatives. The project is expected to provide information on co-control of pollutants, as well as to prove the feasibility of biomass conversion techn...

  19. Maximizing TDRS Command Load Lifetime

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2002-01-01

    …was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
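
    A minimal sketch of the least-squares step described above, fitting inferred mean anomaly against time so that the slope estimates mean motion, might look as follows. The synthetic data and constants are invented; the real tool works from ground ephemeris nodal data.

      import numpy as np

      t_days = np.arange(0.0, 10.0, 0.5)              # observation epochs, days
      true_n = 360.9856                               # invented "true" mean motion, deg/day
      rng = np.random.default_rng(2)
      mean_anomaly = true_n * t_days + rng.normal(0.0, 0.01, t_days.size)  # inferred, unwrapped

      n_est, m0 = np.polyfit(t_days, mean_anomaly, 1) # slope of the LSQ line = mean motion
      print("estimated mean motion: %.4f deg/day" % n_est)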

  20. Utilizing Partnerships to Maximize Resources in College Counseling Services

    ERIC Educational Resources Information Center

    Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas

    2015-01-01

    Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…

  1. Do Speakers and Listeners Observe the Gricean Maxim of Quantity?

    ERIC Educational Resources Information Center

    Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda

    2006-01-01

    The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…

  2. Changing expectancies: cognitive mechanisms and context effects.

    PubMed

    Wiers, Reinout W; Wood, Mark D; Darkes, Jack; Corbin, William R; Jones, Barry T; Sher, Kenneth J

    2003-02-01

    This article presents the proceedings of a symposium at the 2002 RSA Meeting in San Francisco, organized by Reinout W. Wiers and Mark D. Wood. The symposium combined two topics of recent interest in studies of alcohol expectancies: cognitive mechanisms in expectancy challenge studies, and context-related changes of expectancies. With increasing recognition of the substantial role played by alcohol expectancies in drinking, investigators have begun to develop and evaluate expectancy challenge procedures as a potentially promising new prevention strategy. The two major issues addressed in the symposium were whether expectancy challenges result in changes in expectancies that mediate intervention-outcome relations, and the influence of simulated bar environments ("bar labs," in which challenges are usually done) on expectancies. The presentations were (1) An introduction, by Jack Darkes; (2) Investigating the utility of alcohol expectancy challenge with heavy drinking college students, by Mark D. Wood; (3) Effects of an expectancy challenge on implicit and explicit expectancies and drinking, by Reinout W. Wiers; (4) Effects of graphic feedback and simulated bar assessments on alcohol expectancies and consumption, by William R. Corbin; (5) Implicit alcohol associations and context, by Barry T. Jones; and (6) A discussion by Kenneth J. Sher, who pointed out that it is important not only to study changes of expectancies in the paradigm of an expectancy challenge but also to consider the role of changing expectancies in natural development and in treatments not explicitly aimed at changing expectancies. PMID:12605068

  3. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.

    PubMed

    Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B

    2016-08-01

    We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. PMID:27388875
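
    A minimal sketch of the idea, with invented rewards and costs: the agent chooses whichever goal maximizes expected reward minus expected cost, and an observer who knows the costs can invert that same rule to constrain the agent's hidden rewards.

      costs = {"near_toy": 1.0, "far_toy": 4.0}   # effort the agent must pay to reach each toy

      def choose(rewards):
          # agent: pick the goal with maximal utility = expected reward - expected cost
          return max(costs, key=lambda g: rewards[g] - costs[g])

      # observer: which hidden rewards are consistent with seeing the agent pick the far toy?
      consistent = [(r_near, r_far)
                    for r_near in range(6)
                    for r_far in range(6)
                    if choose({"near_toy": r_near, "far_toy": r_far}) == "far_toy"]
      print(consistent)   # exactly the grid points where r_far - 4 > r_near - 1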

  4. Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.

    ERIC Educational Resources Information Center

    Ravizza, Kenneth; Rotella, Robert

    Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…

  5. Using Debate to Maximize Learning Potential: A Case Study

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda

    2007-01-01

    Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…

  6. Factors affecting maximal acid secretion

    PubMed Central

    Desai, H. G.

    1969-01-01

    The mechanisms by which different factors affect the maximal acid secretion of the stomach are discussed with particular reference to nationality, sex, age, body weight or lean body mass, procedural details, mode of calculation, the nature, dose and route of administration of a stimulus, the synergistic action of another stimulus, drugs, hormones, electrolyte levels, anaemia or deficiency of the iron-dependent enzyme system, vagal continuity and parietal cell mass. PMID:4898322

  7. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds until it finally finds the optimal values. PMID:24904252
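
    The paper's learning algorithm is not spelled out in the abstract; the toy below only illustrates the general mechanism with an average-reward update in which each condition is a state and each candidate threshold an action. The two-condition environment and its accuracy/time curves are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      thresholds = np.linspace(0.5, 3.0, 6)      # candidate actions (decision thresholds)
      Q = np.zeros((2, 6))                       # value of each threshold in each condition
      total_r, total_t, rho = 0.0, 0.0, 0.0      # cumulative reward, time, and rate estimate
      alpha = 0.05

      def trial(cond, a):
          # toy psychometrics: higher thresholds are more accurate but slower
          difficulty = 0.5 if cond == 0 else 1.5
          p_correct = 1.0 - 0.5 * np.exp(-thresholds[a] / difficulty)
          rt = 0.5 + thresholds[a]
          return (1.0 if rng.random() < p_correct else 0.0), rt

      for step in range(20_000):
          cond = int(rng.integers(2))                          # condition = SMDP state
          a = int(rng.integers(6)) if rng.random() < 0.1 else int(np.argmax(Q[cond]))
          r, tau = trial(cond, a)
          total_r, total_t = total_r + r, total_t + tau
          rho = total_r / total_t                              # empirical reward rate
          Q[cond, a] += alpha * (r - rho * tau - Q[cond, a])   # charge time at the going rate

      print("chosen thresholds:", thresholds[np.argmax(Q, axis=1)], "rate:", round(rho, 3))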

  8. Second use of transportation batteries: Maximizing the value of batteries for transportation and grid services

    SciTech Connect

    Viswanathan, Vilayanur V.; Kintner-Meyer, Michael CW

    2010-09-30

    Plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) are expected to gain significant market share over the next decade. The economic viability for such vehicles is contingent upon the availability of cost-effective batteries with high power and energy density. For initial commercial success, government subsidies will be highly instrumental in allowing PHEVs to gain a foothold. However, in the long term, for electric vehicles to be commercially viable, the economics have to be self-sustaining. Towards the end of battery life in the vehicle, the energy capacity left in the battery is not sufficient to provide the designed range for the vehicle. Typically, automotive manufacturers indicate the need for battery replacement when the remaining energy capacity reaches 70-80% of the original. There is still sufficient power (kW) and energy capacity (kWh) left in the battery to support various grid ancillary services such as balancing, spinning reserve, and load-following services. As renewable energy penetration increases, the need for such balancing services is expected to increase. This work explores optimality for the replacement of transportation batteries to be subsequently used for grid services. This analysis maximizes the value of an electric vehicle battery to be used as a transportation battery (in its first life) and then as a resource for providing grid services (in its second life). The results are presented across a range of key parameters, such as depth of discharge (DOD), number of batteries used over the life of the vehicle, battery life in vehicle, battery state of health (SOH) at end of life in vehicle and ancillary services rate. The results provide valuable insights for the automotive industry into maximizing the utility and the value of the vehicle batteries in an effort to either reduce the selling price of EVs and PHEVs or maximize the profitability of the emerging electrification of transportation.

  9. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks, robustness with respect to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that algebraic connectivity is a good measure of network robustness. Three algebraic connectivity maximization problems are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first, and three methods to optimize and analyze the network's algebraic connectivity are proposed. The Modified Greedy Perturbation algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution at longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed for finding feasible solutions. The simulation results present the trade-offs among the three methods. Case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is then extended by adding a leg-number constraint, which captures travelers' tolerance for the total number of connecting stops. Binary Semi-Definite Programming (BSDP) with a cutting plane method provides the optimal solution; tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and near-optimal solutions in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed, and each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the…
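
    A minimal sketch of the greedy flavor of these heuristics: compute the algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian) and repeatedly add the candidate edge that increases it most. The 5-node network and the two-edge budget are invented, and this toy re-solves the eigenproblem for every candidate, whereas a perturbation heuristic like MGP would instead score candidate edges (i, j) from the Fiedler vector, e.g. by (v_i - v_j)^2.

      import itertools
      import numpy as np

      def lambda2(adj):
          # algebraic connectivity = second-smallest eigenvalue of the Laplacian
          lap = np.diag(adj.sum(axis=1)) - adj
          return np.linalg.eigvalsh(lap)[1]

      adj = np.zeros((5, 5))
      for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # start from a 5-node path graph
          adj[i, j] = adj[j, i] = 1.0

      for _ in range(2):                              # budget: add two new routes
          best_edge, best_val = None, -1.0
          for i, j in itertools.combinations(range(5), 2):
              if adj[i, j] == 0.0:
                  cand = adj.copy()
                  cand[i, j] = cand[j, i] = 1.0
                  val = lambda2(cand)
                  if val > best_val:
                      best_edge, best_val = (i, j), val
          i, j = best_edge
          adj[i, j] = adj[j, i] = 1.0
          print("add route", best_edge, "-> lambda2 =", round(best_val, 3))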

  10. User Expectations: Nurses' Perspective.

    PubMed

    Gürsel, Güney

    2016-01-01

    Healthcare is a technology-intensive industry. Although all healthcare staff need qualified computer support, physicians and nurses need it most. As nursing practice is an information-intensive activity, understanding nurses' expectations of healthcare information systems (HCIS) is essential to meeting their needs and supporting them better. In this study, the perceived importance of nurses' expectations of HCIS is investigated, and two HCISs are evaluated for how well they meet nurses' expectations, using fuzzy logic methodologies. PMID:27332398

  11. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, KODAMA is driven by an integrated procedure of cross-validation of the results: the discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
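
    A loose sketch of the accuracy-maximization loop at the core of such a procedure: propose small changes to a tentative labeling and keep them only when cross-validated predictive accuracy does not decrease. The dataset, the KNN classifier, and the label-swap proposal (which preserves class sizes, avoiding the degenerate single-class solution) are simplifications, not the published algorithm.

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      X, _ = make_blobs(n_samples=60, centers=2, random_state=4)  # hidden 2-cluster structure
      rng = np.random.default_rng(4)
      labels = rng.integers(2, size=60)                           # start from random labels

      def cv_accuracy(y):
          return cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean()

      score = cv_accuracy(labels)
      for _ in range(300):                                        # Monte Carlo refinement
          i, j = rng.choice(60, size=2, replace=False)
          if labels[i] == labels[j]:
              continue
          proposal = labels.copy()
          proposal[i], proposal[j] = proposal[j], proposal[i]     # swap keeps class sizes fixed
          new_score = cv_accuracy(proposal)
          if new_score >= score:                                  # keep non-worsening moves
              labels, score = proposal, new_score
      print("cross-validated accuracy of refined labeling: %.2f" % score)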

  12. Maximally coherent mixed states: Complementarity between maximal coherence and mixedness

    NASA Astrophysics Data System (ADS)

    Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar

    2015-05-01

    Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upper-bounds the maximum quantum coherence for fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understand the interplay between a resource and noise in open quantum systems.

  13. Reflections on Expectations

    ERIC Educational Resources Information Center

    Santini, Joseph

    2014-01-01

    This article describes a teacher's reflections on the matter of student expectations. Santini begins with a common understanding of the "Pygmalion effect" from research projects conducted in earlier years that intimated "people's expectations could influence other people in the world around them." In the world of deaf…

  14. A Rational Expectations Experiment.

    ERIC Educational Resources Information Center

    Peterson, Norris A.

    1990-01-01

    Presents a simple classroom simulation of the Lucas supply curve mechanism with rational expectations. Concludes that the exercise has proved very useful as an introduction to the concepts of rational and adaptive expectations, the Lucas supply curve, the natural rate hypothesis, and random supply shocks. (DB)

  15. An Unexpected Expected Value.

    ERIC Educational Resources Information Center

    Schwartzman, Steven

    1993-01-01

    Discusses the surprising result that the expected number of marbles of one color drawn from a set of marbles of two colors after two draws without replacement is the same as the expected number of that color marble after two draws with replacement. Presents mathematical models to help explain this phenomenon. (MDH)
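
    The equality follows from linearity of expectation: with r marbles of the given color among n, each individual draw, viewed unconditionally, is uniform over all n marbles under either scheme, so

      \[
        E[\text{count of the color in two draws}]
        \;=\; \Pr(\text{1st is the color}) + \Pr(\text{2nd is the color})
        \;=\; \frac{r}{n} + \frac{r}{n} \;=\; \frac{2r}{n},
      \]

    since even without replacement the second draw, before anything is observed, is equally likely to be any of the n marbles.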

  16. Maximal acceleration and radiative processes

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2015-08-01

    We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.

  17. Lighting spectrum to maximize colorfulness.

    PubMed

    Masuda, Osamu; Nascimento, Sérgio M C

    2012-02-01

    The spectrum of modern illumination can be computationally tailored considering the visual effects of lighting. We investigated the spectral profiles of the white illumination maximizing the theoretical limits of the perceivable object colors. A large number of metamers with various degrees of smoothness were generated on and around the Planckian locus, and the volume in the CIELAB space of the optimal colors for each metamer was calculated. The optimal spectrum was found at a color temperature of around 5.7×10³ K, had three peaks, at both ends of the visible band and at around 510 nm, and was 25% better than daylight and 35% better than Thornton's prime color lamp. PMID:22297368

  18. Health expectancy indicators.

    PubMed Central

    Robine, J. M.; Romieu, I.; Cambois, E.

    1999-01-01

    An outline is presented of progress in the development of health expectancy indicators, which are growing in importance as a means of assessing the health status of populations and determining public health priorities. PMID:10083720

  19. Maximal dinucleotide and trinucleotide circular codes.

    PubMed

    Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe

    2016-01-21

    We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly less than 6 elements (the maximum size of dinucleotide circular codes). On the other hand, a computer calculation shows that there are maximal trinucleotide circular codes with less than 20 elements (the maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements, and no maximal trinucleotide circular code having less than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined. PMID:26382231

  20. Maximal switchability of centralized networks

    NASA Astrophysics Data System (ADS)

    Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu

    2016-08-01

    We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number N_s of weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.

  1. A Maximally Supersymmetric Kondo Model

    SciTech Connect

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC

    2012-02-17

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  2. Maximizing the optical network capacity.

    PubMed

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572

  4. Maximal Oxygen Intake and Maximal Work Performance of Active College Women.

    ERIC Educational Resources Information Center

    Higgs, Susanne L.

    Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…

  5. Performance expectation plan

    SciTech Connect

    Ray, P.E.

    1998-09-04

    This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met expectations and made notable improvement in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be "Excellent."

  6. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  7. Heterogeneity in expected longevities.

    PubMed

    Pijoan-Mas, Josep; Ríos-Rull, José-Víctor

    2014-12-01

    We develop a new methodology to compute differences in the expected longevity of individuals of a given cohort who are in different socioeconomic groups at a certain age. We address the two main problems associated with the standard use of life expectancy: (1) that people's socioeconomic characteristics change, and (2) that mortality has decreased over time. Our methodology uncovers substantial heterogeneity in expected longevities, yet much less heterogeneity than what arises from the naive application of life expectancy formulae. We decompose the longevity differences into differences in health at age 50, differences in the evolution of health with age, and differences in mortality conditional on health. Remarkably, education, wealth, and income are health-protecting but have very little impact on two-year mortality rates conditional on health. Married people and nonsmokers, however, benefit directly in their immediate mortality. Finally, we document an increasing time trend of the socioeconomic gradient of longevity in the period 1992-2008, and we predict an increase in the socioeconomic gradient of mortality rates for the coming years. PMID:25391225

  8. Maintaining High Expectations

    ERIC Educational Resources Information Center

    Williams, Roger; Williams, Sherry

    2014-01-01

    Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…

  9. Parenting with High Expectations

    ERIC Educational Resources Information Center

    Timperlake, Benna Hull; Sanders, Genelle Timperlake

    2014-01-01

    In some ways raising deaf or hard of hearing children is no different than raising hearing children; expectations must be established and periodically tweaked. Benna Hull Timperlake, who with husband Roger, raised two hearing children in addition to their deaf daughter, Genelle Timperlake Sanders, and Genelle, now a deaf professional, share their…

  10. Great Expectations. [Lesson Plan].

    ERIC Educational Resources Information Center

    Devine, Kelley

    Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy, and that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…

  11. Does mental exertion alter maximal muscle activation?

    PubMed Central

    Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald

    2014-01-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404

  12. Inflation in maximal gauged supergravities

    SciTech Connect

    Kodama, Hideo; Nozawa, Masato

    2015-05-18

    We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36' representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point, at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^-3 and is close to the value in the Starobinsky model.

  13. Glacier Surface Monitoring by Maximizing Mutual Information

    NASA Astrophysics Data System (ADS)

    Erten, E.; Rossi, C.; Hajnsek, I.

    2012-07-01

    Compared with single-channel SAR, Polarimetric Synthetic Aperture Radar (PolSAR) images have been shown in the literature to add valuable information for temporal scene characterization. However, although a number of recent studies have focused on single-polarization glacier monitoring, the potential of polarimetry to estimate the surface velocity of glaciers has not been explored, owing to the complex polarization mechanisms in glacier ice and snow. In this paper, a new approach to the problem of monitoring glacier surface velocity is proposed by means of temporal PolSAR images, using a basic concept from information theory: Mutual Information (MI). The proposed polarimetric tracking method applies the MI to measure the statistical dependence between temporal polarimetric images, which is assumed to be maximal if the images are geometrically aligned. Because the proposed polarimetric tracking method is general, it can be applied to any kind of multivariate remote sensing data, such as multi-spectral optical and single-channel SAR images. The proposed polarimetric tracking is then used to retrieve the surface velocity of Aletsch glacier, located in Switzerland, and of Inyltshik glacier in Kyrgyzstan, with two different SAR sensors: Envisat C-band (single polarized) and DLR airborne L-band (fully polarimetric) systems, respectively. Investigating the effect of the number of channels (polarimetry) on tracking demonstrated that, as expected, the presence of snow shifts the phase center differently in different polarizations, for example when tracking with temporal HH rather than temporal VV channels. In short, a change in the polarimetric signature of the scatterer can shift the phase center, raising the question of how much of the observed displacement is motion and how much is penetration. In this paper, it is shown that, by exploiting multi-channel SAR statistics, it is possible to separate these contributions.
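
    As a minimal illustration of the MI criterion at the heart of this approach, the sketch below estimates the integer offset between two image patches by exhaustively maximizing a histogram-based MI estimate. It is a toy stand-in, not the authors' PolSAR pipeline; the synthetic arrays, bin count, and search radius are all illustrative assumptions.

        # Minimal sketch: estimate the (dy, dx) offset between two patches by
        # maximizing a histogram estimate of their mutual information.
        import numpy as np

        def mutual_information(a, b, bins=32):
            """Histogram estimate of MI between two equally sized arrays."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                      # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def best_offset(ref, mov, max_shift=8):
            """Exhaustive search for the integer shift of mov that maximizes MI."""
            best, argbest = -np.inf, (0, 0)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
                    mi = mutual_information(ref, shifted)
                    if mi > best:
                        best, argbest = mi, (dy, dx)
            return argbest, best

        rng = np.random.default_rng(0)
        ref = rng.normal(size=(64, 64))
        mov = np.roll(ref, (3, -2), axis=(0, 1)) + 0.1 * rng.normal(size=(64, 64))
        print(best_offset(ref, mov))          # expect an offset close to (-3, 2)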

  14. Specificity of a Maximal Step Exercise Test

    ERIC Educational Resources Information Center

    Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.

    2007-01-01

    To adhere to the principle of "exercise specificity," exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO2) when tested using an activity-specific, maximal step…

  15. An Activity for Exploring Marital Expectations

    ERIC Educational Resources Information Center

    Saur, William G.

    1976-01-01

    The learning activity, intended for high school students in a family life education course, is designed to explore attitudes towards mate qualities in order to increase the students' awareness of marital expectations. The activity utilizes the format of an auction game and a group discussion. (EC)

  16. Post-Secondary Expectations and Educational Attainment

    ERIC Educational Resources Information Center

    Sciarra, Daniel T.; Ambrosino, Katherine E.

    2011-01-01

    This study utilized student, teacher, and parent expectations during high school to analyze their predictive effect on post-secondary education status two years after scheduled graduation. The sample included 5,353 students, parents and teachers who participated in the Educational Longitudinal Study (ELS; 2002-2006). The researchers analyzed data…

  17. Statistical mechanics of maximal independent sets

    NASA Astrophysics Data System (ADS)

    Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl

    2009-12-01

    The graph theoretic concept of a maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding minimum- and maximum-density maximal independent sets are hard optimization problems. In this paper, we use the cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica symmetric and one-step replica symmetry breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
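
    For readers unfamiliar with the object under study, the sketch below greedily constructs a maximal independent set on a random graph and verifies the packing and covering constraints that define it. The greedy rule is only an illustration of the definition; it says nothing about the entropy calculations or the cavity method used in the paper.

        # Sketch: greedily grow a maximal independent set (MIS) on a random graph.
        import random

        def random_graph(n, p, seed=0):
            rng = random.Random(seed)
            adj = {v: set() for v in range(n)}
            for u in range(n):
                for v in range(u + 1, n):
                    if rng.random() < p:
                        adj[u].add(v)
                        adj[v].add(u)
            return adj

        def greedy_mis(adj, order=None):
            """Occupy nodes one by one; a node enters iff no neighbor is occupied."""
            mis = set()
            for v in (order or sorted(adj)):
                if adj[v].isdisjoint(mis):
                    mis.add(v)
            return mis

        adj = random_graph(200, 0.05)
        mis = greedy_mis(adj)
        assert all(adj[v].isdisjoint(mis) for v in mis)         # packing constraint
        assert all(adj[v] & mis for v in adj if v not in mis)   # covering constraint
        print(len(mis))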

  18. Utilizing Alcohol Expectancies in the Treatment of Alcoholism.

    ERIC Educational Resources Information Center

    Brown, Sandra A.

    The heterogeneity of alcoholic populations may be one reason that few specific therapeutic approaches to the treatment of alcoholism have been consistently demonstrated to improve treatment outcome across studies. To individualize alcoholism treatment, dimensions which are linked to drinking or relapse and along which alcoholics display significant…

  19. Utilization of the Garland Assessment of Graduation Expectations Test Results.

    ERIC Educational Resources Information Center

    Strozeski, Michael W.

    Virtually every school system is concerned with two educational considerations: (1) where the students are academically, and (2) how to get the students to a particular set of points. Minimum competency testing has been proposed as one way to handle these concerns. Competency testing has, however, been criticized for encouraging "teaching to the…

  20. The futility of utility: how market dynamics marginalize Adam Smith

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2000-10-01

    Economic theorizing is based on the postulated, nonempiric notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility, and that the results represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically due to nonintegrable dynamics when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data, or in real market data.

  1. Illustrated Examples of the Effects of Risk Preferences and Expectations on Bargaining Outcomes.

    ERIC Educational Resources Information Center

    Dickinson, David L.

    2003-01-01

    Describes bargaining examples that use expected utility theory. Provides example results that are intuitive, shown graphically and algebraically, and offer upper-level student samples that illustrate the usefulness of the expected utility theory. (JEH)

  2. Dialysis centers - what to expect

    MedlinePlus

    ... treatment. Many people have dialysis in a treatment center. This article focuses on hemodialysis at a treatment center. ... Artificial kidneys - dialysis centers - what to expect; Dialysis - what to expect; Renal replacement therapy - dialysis centers - what to expect

  3. Matching, maximizing, and hill-climbing

    PubMed Central

    Hinson, John M.; Staddon, J. E. R.

    1983-01-01

    In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350

  4. Are all maximally entangled states pure?

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.

    2005-10-01

    We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
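
    A quick numerical illustration of the bipartite measure involved: the entropy of entanglement of a pure state is the von Neumann entropy of either reduced state, computable from the Schmidt coefficients. The sketch below is a textbook computation, not code from the paper; it shows the two-qubit Bell state attaining the maximum of 1 ebit while a product state scores 0.

        # Sketch: entropy of entanglement of a bipartite pure state.
        import numpy as np

        def entropy_of_entanglement(psi, dims):
            """Von Neumann entropy (base 2) of the reduced state of subsystem A."""
            dA, dB = dims
            m = psi.reshape(dA, dB)                 # |psi> as a dA x dB matrix
            s = np.linalg.svd(m, compute_uv=False)  # Schmidt coefficients
            p = s**2
            p = p[p > 1e-12]
            return float(-(p * np.log2(p)).sum())

        bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
        product = np.array([1, 0, 0, 0], dtype=float)    # |00>
        print(entropy_of_entanglement(bell, (2, 2)))     # 1.0 (maximally entangled)
        print(entropy_of_entanglement(product, (2, 2)))  # 0.0 (separable)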

  5. Are all maximally entangled states pure?

    SciTech Connect

    Cavalcanti, D.; Brandao, F.G.S.L.; Terra Cunha, M.O.

    2005-10-15

    We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.

  6. MAXIM Pathfinder x-ray interferometry mission

    NASA Astrophysics Data System (ADS)

    Gendreau, Keith C.; Cash, Webster C.; Shipley, Ann F.; White, Nicholas

    2003-03-01

    The MAXIM Pathfinder (MP) mission is under study as a scientific and technical stepping stone for the full MAXIM X-ray interferometry mission. While full MAXIM will resolve the event horizons of black holes with 0.1 microarcsecond imaging, MP will address scientific and technical issues as a 100 microarcsecond imager with some capabilities to resolve microarcsecond structure. We will present the primary science goals of MP. These include resolving stellar coronae and distinguishing between jets and accretion disks in AGN. This paper will also present the baseline design of MP. We will overview the challenging technical requirements and solutions for formation flying, target acquisition, and metrology.

  7. Maximal hypersurfaces in asymptotically stationary spacetimes

    NASA Astrophysics Data System (ADS)

    Chrusciel, Piotr T.; Wald, Robert M.

    1992-12-01

    The purpose of this work is to extend existing results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurfaces in asymptotically stationary spacetimes is proven. Existence of maximal surfaces and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one-parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'black hole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of spacetimes possesses a black hole and a white hole, with the black and white hole horizons intersecting in a compact 2-surface S.

  8. Sociology of Low Expectations

    PubMed Central

    Samuel, Gabrielle; Williams, Clare

    2015-01-01

    Social scientists have drawn attention to the role of hype and optimistic visions of the future in providing momentum to biomedical innovation projects by encouraging innovation alliances. In this article, we show how less optimistic, uncertain, and modest visions of the future can also provide innovation projects with momentum. Scholars have highlighted the need for clinicians to carefully manage the expectations of their prospective patients. Using the example of a pioneering clinical team providing deep brain stimulation to children and young people with movement disorders, we show how clinicians confront this requirement by drawing on their professional knowledge and clinical expertise to construct visions of the future with their prospective patients; visions which are personalized, modest, and tainted with uncertainty. We refer to this vision-constructing work as recalibration, and we argue that recalibration enables clinicians to manage the tension between the highly optimistic and hyped visions of the future that surround novel biomedical interventions, and the exigencies of delivering those interventions in a clinical setting. Drawing on work from science and technology studies, we suggest that recalibration enrolls patients in an innovation alliance by creating a shared understanding of how the “effectiveness” of an innovation shall be judged. PMID:26527846

  9. New standard exceeds expectations

    SciTech Connect

    Bennett, M.J.

    1993-08-01

    The new ASTM environmental due diligence standard is delivering far more than expected when it was conceived in 1990. Its use goes well beyond the relatively narrow legal liability protection that was the primary goal in its development. The real estate industry, spearheaded by the lending community, was preoccupied with environmental risk and liability. Lenders throughout the concept's evolution have been at the forefront in defining environmental due diligence. The lender liability rule is intended to protect property owners from CERCLA liability for property they own or companies they manage (for example, as a result of foreclosure). The new site assessment standard increasingly is considered a benchmark for prudent environmental due diligence in the interest of risk management, not legal liability. Risk management concerns, including collateral devaluation and corporate credit risk, are becoming dominant areas of policy focus in the lending industry. Lenders now are revising their policies to incorporate transactions beyond issues of real estate, in which a company's economic viability and ability to service debt could be impacted by an environmental problem unrelated to property transfers.

  10. Expectations and speech intelligibility.

    PubMed

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs. PMID:25994710

  11. Gaussian maximally multipartite-entangled states

    SciTech Connect

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-12-15

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n<=7.

  12. Natural selection and the maximization of fitness.

    PubMed

    Birch, Jonathan

    2016-08-01

    The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits. PMID:25899152

  13. AUC-Maximizing Ensembles through Metalearning

    PubMed Central

    LeDell, Erin; van der Laan, Mark J.; Peterson, Maya

    2016-01-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
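
    The metalearning step the abstract describes can be caricatured as a direct search for convex combination weights that maximize AUC. The sketch below does this on synthetic base-learner predictions with a softmax parameterization and Nelder-Mead; it is a hedged toy, not the Super Learner implementation evaluated in the paper, and the learners and data are invented.

        # Sketch of the metalearning step only: convex weights over base-learner
        # cross-validated predictions, chosen to maximize AUC.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 1000
        y = rng.integers(0, 2, size=n)                   # binary outcome
        # Columns = cross-validated predictions of three hypothetical base learners.
        Z = np.column_stack([
            y * 0.6 + rng.normal(scale=0.8, size=n),     # weak learner
            y * 1.0 + rng.normal(scale=0.8, size=n),     # stronger learner
            rng.normal(size=n),                          # pure-noise learner
        ])

        def neg_auc(theta):
            w = np.exp(theta) / np.exp(theta).sum()      # softmax -> simplex weights
            return -roc_auc_score(y, Z @ w)

        res = minimize(neg_auc, x0=np.zeros(Z.shape[1]), method="Nelder-Mead")
        w = np.exp(res.x) / np.exp(res.x).sum()
        print("weights:", np.round(w, 3), "ensemble AUC:", -res.fun)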

  14. Formation Control of the MAXIM L2 Libration Orbit Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda

    2004-01-01

    The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.

  15. Formation Control of the MAXIM L2 Libration Orbit Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda

    2004-01-01

    The Micro-Arcsecond Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imaging mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 microarcsecond imaging. To achieve mission requirements, MAXIM will have to improve on pointing by orders of magnitude. This pointing requirement impacts the control and design of the formation. Currently the architecture is comprised of 25 spacecraft, which will form the sparse apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This configuration will deploy 24 spacecraft as optics modules and one as the detector. The formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. In this paper, we provide analysis and trades of several control efforts that are dependent upon the pointing requirements and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions that minimize the control efforts and we address both continuous and discrete control via LQR and feedback linearization. Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.

  16. Explanatory Variance in Maximal Oxygen Uptake

    PubMed Central

    Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.

    2006-01-01

    The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18-24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants’ head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (% BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (% BF). Key points: Body fat is an important predictor of VO2max. Individuals with a low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher workloads; therefore, the net oxygen cost of the exercise cannot be controlled in inexperienced individuals running in water at fatiguing workloads. Experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running. A submaximal water running protocol is needed in the research literature for individuals trained in the mechanics of water running, given the popularity of water running in rehabilitative exercise programs and training programs. PMID:24260003
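
    The reported regression is simple enough to apply directly, as in the sketch below; note that it was fit on 18- to 24-year-olds trained in water running and should not be assumed to generalize beyond that population.

        # The paper's reported regression, implemented directly.
        def predict_vo2max(percent_body_fat):
            """Estimated VO2max in ml/kg/min from percent body fat (SEE = 3.27)."""
            return 56.14 - 0.92 * percent_body_fat

        print(predict_vo2max(15.0))   # 42.34 ml/kg/min at 15% body fat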

  17. Expecting the Best for Students: Teacher Expectations and Academic Outcomes

    ERIC Educational Resources Information Center

    Rubie-Davies, Christine; Hattie, John; Hamilton, Richard

    2006-01-01

    Background: Research into teacher expectations has shown that these have an effect on student achievement. Some researchers have explored the impact of various student characteristics on teachers' expectations. One attribute of interest is ethnicity. Aims: This study aimed to explore differences in teachers' expectations and judgments of student…

  18. Great Expectations: Temporal Expectation Modulates Perceptual Processing Speed

    ERIC Educational Resources Information Center

    Vangkilde, Signe; Coull, Jennifer T.; Bundesen, Claus

    2012-01-01

    In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on "perceptual" speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus…

  19. Resources and energetics determined dinosaur maximal size

    PubMed Central

    McNab, Brian K.

    2009-01-01

    Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of 2 species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has a smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600

  20. Energy Band Calculations for Maximally Even Superlattices

    NASA Astrophysics Data System (ADS)

    Krantz, Richard; Byrd, Jason

    2007-03-01

    Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
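
    One standard way to generate such distributions is the floor-function construction of maximally even sets from the music-theory literature (Clough and Douthett); assuming that is the construction intended here, the sketch below reproduces the keyboard example and a hypothetical well/barrier unit cell.

        # Sketch: the floor-function construction of a maximally even set,
        # here used to place d "well" sites among c positions.
        def maximally_even(c, d, mode=0):
            """Indices of d maximally even sites among c positions."""
            return sorted({(i * c + mode) // d for i in range(d)})

        white = maximally_even(12, 7)                      # [0, 1, 3, 5, 6, 8, 10]
        black = [k for k in range(12) if k not in white]   # [2, 4, 7, 9, 11]
        print(white, black)                                # the piano-keyboard pattern

        # A hypothetical superlattice unit cell: wells (W) among barriers (B).
        wells = set(maximally_even(24, 10))                # 10 wells among 24 sites
        print("".join("W" if i in wells else "B" for i in range(24)))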

  1. Maximal Holevo Quantity Based on Weak Measurements

    PubMed Central

    Wang, Yao-Kun; Fei, Shao-Ming; Wang, Zhi-Xi; Cao, Jun-Peng; Fan, Heng

    2015-01-01

    The Holevo bound is a keystone in many applications of quantum information theory. We propose the “maximal Holevo quantity for weak measurements” as a generalization of the maximal Holevo quantity, which is defined by the optimal projective measurements. Weak measurements are necessary in scenarios where only weak measurements can be performed, for example because the system is macroscopic, or where one deliberately uses them so that the disturbance on the measured system can be controlled, for example in quantum key distribution protocols. We evaluate systematically the maximal Holevo quantity for weak measurements for Bell-diagonal states and find a series of results. Furthermore, we find that weak measurements can be realized by noise and projective measurements. PMID:26090962
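
    For orientation, the underlying quantity is chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i). The sketch below evaluates it for a simple qubit ensemble; it illustrates only the standard Holevo quantity, not the weak-measurement generalization proposed in the paper.

        # Sketch: the Holevo quantity for an ensemble of density matrices.
        import numpy as np

        def von_neumann_entropy(rho):
            ev = np.linalg.eigvalsh(rho)
            ev = ev[ev > 1e-12]
            return float(-(ev * np.log2(ev)).sum())

        def holevo(ps, rhos):
            rho_avg = sum(p * r for p, r in zip(ps, rhos))
            return von_neumann_entropy(rho_avg) - sum(
                p * von_neumann_entropy(r) for p, r in zip(ps, rhos))

        ket0 = np.array([[1.0], [0.0]])
        ket_plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
        rhos = [ket0 @ ket0.T, ket_plus @ ket_plus.T]   # two pure, non-orthogonal states
        print(holevo([0.5, 0.5], rhos))                 # ~0.601 bits, below 1 bit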

  2. Caffeine, maximal power output and fatigue.

    PubMed Central

    Williams, J H; Signorile, J F; Barnes, W S; Henrich, T W

    1988-01-01

    The purpose of this investigation was to determine the effects of caffeine ingestion on maximal power output and fatigue during short term, high intensity exercise. Nine adult males performed 15 s maximal exercise bouts 60 min after ingestion of caffeine (7 mg.kg-1) or placebo. Exercise bouts were carried out on a modified cycle ergometer which allowed power output to be computed for each one-half pedal stroke via microcomputer. Peak power output under caffeine conditions was not significantly different from that obtained following placebo ingestion. Similarly, time to peak power, total work, power fatigue index and power fatigue rate did not differ significantly between caffeine and placebo conditions. These results suggest that caffeine ingestion does not increase one's maximal ability to generate power. Further, caffeine does not alter the rate or magnitude of fatigue during high intensity, dynamic exercise. PMID:3228680

  3. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
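
    A toy version of the greedy rule can be written down directly: given a belief over target locations and an observation model whose reliability falls off with distance from fixation, choose the fixation with the largest expected information gain. Everything in the sketch below (the 1-D grid, the exponential falloff, the binary detector) is a hypothetical stand-in for the authors' foveated observer.

        # Toy sketch: pick the fixation that maximizes mutual information
        # between a binary glimpse and the target location.
        import numpy as np

        N = 50
        p = np.full(N, 1.0 / N)                  # prior belief over target location
        x = np.arange(N)

        def p_detect(t, f, falloff=5.0):
            """Prob. of a 'seen' report if the target is at t and gaze is at f;
            reliability decays away from the fovea (hypothetical model)."""
            return 0.5 + 0.45 * np.exp(-np.abs(t - f) / falloff)

        def info_gain(p, f):
            po_t = p_detect(x, f)                # P(o=1 | t, f) for every t
            po = (p * po_t).sum()                # P(o=1 | f)
            def term(a, b):
                with np.errstate(divide="ignore", invalid="ignore"):
                    v = a * np.log2(a / b)
                return np.where(a > 0, v, 0.0)
            return (p * (term(po_t, po) + term(1 - po_t, 1 - po))).sum()

        gains = [info_gain(p, f) for f in range(N)]
        print("first fixation:", int(np.argmax(gains)))  # near the belief's center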

  4. Measuring Alcohol Expectancies in Youth

    ERIC Educational Resources Information Center

    Randolph, Karen A.; Gerend, Mary A.; Miller, Brenda A.

    2006-01-01

    Beliefs about the consequences of using alcohol, alcohol expectancies, are powerful predictors of underage drinking. The Alcohol Expectancies Questionnaire-Adolescent form (AEQ-A) has been widely used to measure expectancies in youth. Despite its broad use, the factor structure of the AEQ-A has not been firmly established. It is also not known…

  5. A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.

    PubMed

    Leibfried, Felix; Braun, Daniel A

    2015-08-01

    Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward. PMID:26079747
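
    The objective described here, expected reward minus a price on the mutual information between input and output, has a well-known self-consistent solution p(y|x) proportional to p(y)exp(beta*U(x,y)), reachable by Blahut-Arimoto-style iteration. The sketch below implements that generic solution, not the paper's synaptic weight update rule; the utility matrix and temperatures are illustrative.

        # Sketch: bounded-rational channel maximizing E[U] - (1/beta) * I(X;Y).
        import numpy as np

        def bounded_rational_channel(U, px, beta, iters=200):
            nx, ny = U.shape
            py = np.full(ny, 1.0 / ny)
            for _ in range(iters):
                p_y_given_x = py * np.exp(beta * U)      # shape (nx, ny), unnormalized
                p_y_given_x /= p_y_given_x.sum(axis=1, keepdims=True)
                py = px @ p_y_given_x                    # update the output marginal
            return p_y_given_x, py

        U = np.array([[1.0, 0.0], [0.0, 1.0]])           # utility of action y in state x
        px = np.array([0.5, 0.5])
        for beta in (0.1, 1.0, 10.0):                    # low beta -> cheap, sloppy channel
            ch, _ = bounded_rational_channel(U, px, beta)
            print(beta, np.round(ch, 3))                 # high beta -> near-deterministic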

  6. On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites

    ERIC Educational Resources Information Center

    Penev, Spiridon; Raykov, Tenko

    2006-01-01

    A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
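
    In the standard setup, the reliability of a composite w'X is the ratio w'S_T w / w'S_X w of true-score to observed variance, and maximizing a ratio of quadratic forms is a generalized eigenproblem. The sketch below solves it for a hypothetical one-factor instrument; the loadings and error variances are invented for illustration.

        # Sketch: reliability-maximizing composite weights via a generalized
        # eigenproblem S_T w = r * S_X w, with S_X = S_T + S_E.
        import numpy as np
        from scipy.linalg import eigh

        lam = np.array([0.9, 0.7, 0.5])          # hypothetical factor loadings
        S_T = np.outer(lam, lam)                 # true-score covariance (one factor)
        S_E = np.diag([0.3, 0.5, 0.7])           # error variances
        S_X = S_T + S_E

        vals, vecs = eigh(S_T, S_X)              # eigenvalues in ascending order
        w = vecs[:, -1] * np.sign(vecs[:, -1].sum())   # eigenvector of largest ratio
        print("maximal reliability:", vals[-1])
        print("weights (rescaled):", w / w.max())      # proportional to loading/error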

  7. Patient (customer) expectations in hospitals.

    PubMed

    Bostan, Sedat; Acuner, Taner; Yilmaz, Gökhan

    2007-06-01

    The expectations of patients are one of the determining factors of healthcare service. The purpose of this study is to measure patients' expectations, based on patients' rights. The study was conducted with a Likert-type survey in the Trabzon population. The analyses showed that the level of patient expectation was high on the factor of receiving information and at an acceptable level on the other factors. Statistically meaningful relationships were found between the expectations of the patients and age, sex, education, health insurance, and family income (p<0.05). According to this study, the current legal regulations have higher standards than the expectations of the patients. The patients' high satisfaction level is interpreted as a consequence of their low level of expectation. It is suggested that educational and public awareness work on patients' rights should be undertaken in order to raise the expectations of the patients. PMID:17028043

  8. Understanding violations of Gricean maxims in preschoolers and adults.

    PubMed

    Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji

    2015-01-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018

  9. Understanding violations of Gricean maxims in preschoolers and adults

    PubMed Central

    Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji

    2015-01-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018

  10. Maximizing the Spectacle of Water Fountains

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
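
    As a flavor of the optimization involved, the sketch below asks a simpler cousin of the paper's question: for a single jet at fixed speed, which launch angle maximizes the area under the parabolic arc? (The answer is 60 degrees, not the 45 degrees that maximizes range; the paper's surface-area and volume measures for circular spigot arrays are more elaborate.) The speed and gravity values are arbitrary.

        # Sketch: angle maximizing the area under one jet's parabolic arc.
        import numpy as np

        g, v = 9.81, 5.0                                 # m/s^2 and m/s, illustrative

        def area_under_arc(theta):
            """Closed form: integrating y(x) over the range gives
            (2/3) * v**4 * sin(theta)**3 * cos(theta) / g**2."""
            return 2.0 * v**4 * np.sin(theta)**3 * np.cos(theta) / (3.0 * g**2)

        thetas = np.radians(np.linspace(1.0, 89.0, 8801))
        best = thetas[np.argmax(area_under_arc(thetas))]
        print(np.degrees(best))                          # ~60 degrees (45 maximizes range)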

  11. A Model of College Tuition Maximization

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of student's is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  12. Maximal aerobic exercise following prolonged sleep deprivation.

    PubMed

    Goodman, J; Radomski, M; Hart, L; Plyley, M; Shephard, R J

    1989-12-01

    The effect of 60 h without sleep upon maximal oxygen intake was examined in 12 young women, using a cycle ergometer protocol. The arousal of the subjects was maintained by requiring the performance of a sequence of cognitive tasks throughout the experimental period. Well-defined oxygen intake plateaus were obtained both before and after sleep deprivation, and no change of maximal oxygen intake was observed immediately following sleep deprivation. The endurance time for exhausting exercise also remained unchanged, as did such markers of aerobic performance as peak exercise ventilation, peak heart rate, peak respiratory gas exchange ratio, and peak blood lactate. However, as in an earlier study of sleep deprivation with male subjects (in which a decrease of treadmill maximal oxygen intake was observed), the formula of Dill and Costill (4) indicated the development of a substantial (11.6%) increase of estimated plasma volume percentage with corresponding decreases in hematocrit and red cell count. Possible factors sustaining maximal oxygen intake under the conditions of the present experiment include (1) maintained arousal of the subjects with no decrease in peak exercise ventilation or the related respiratory work and (2) use of a cycle ergometer rather than a treadmill test with possible concurrent differences in the impact of hematocrit levels and plasma volume expansion upon peak cardiac output and thus oxygen delivery to the working muscles. PMID:2628360
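
    The Dill and Costill calculation cited in the abstract estimates the plasma volume change from pre- and post-condition hemoglobin and hematocrit. A direct implementation follows; the input values are illustrative, not the study's data.

        # The Dill and Costill (1974) plasma volume calculation.
        def plasma_volume_change(hb_pre, hb_post, hct_pre, hct_post):
            """Percent change in plasma volume; Hct given as fractions (e.g., 0.42)."""
            bv_post = 100.0 * hb_pre / hb_post       # blood volume, pre-value set to 100
            cv_post = bv_post * hct_post             # red cell volume
            pv_post = bv_post - cv_post
            pv_pre = 100.0 - 100.0 * hct_pre
            return 100.0 * (pv_post - pv_pre) / pv_pre

        print(plasma_volume_change(14.5, 13.7, 0.43, 0.40))  # positive => PV expansion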

  13. Does evolution lead to maximizing behavior?

    PubMed

    Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen

    2015-07-01

    A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences). PMID:26082379

  14. How to Generate Good Profit Maximization Problems

    ERIC Educational Resources Information Center

    Davis, Lewis

    2014-01-01

    In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
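
    For the cubic-cost case, a worked solver is easy to state: the competitive firm's first-order condition p = MC(q) is a quadratic in q, and the profit-maximizing root is the one on the rising branch of marginal cost. The sketch below implements this with an invented cost function; it is one possible classroom treatment, not the article's own problem set.

        # Sketch: price taker with C(q) = a*q^3 + b*q^2 + c*q + F facing price p.
        import numpy as np

        def competitive_supply(p, a, b, c, F):
            # First-order condition p = MC(q) = 3a q^2 + 2b q + c.
            roots = np.roots([3 * a, 2 * b, c - p])
            candidates = [q.real for q in roots
                          if abs(q.imag) < 1e-9 and q.real > 0
                          and 6 * a * q.real + 2 * b > 0]   # second-order: MC rising
            q = max(candidates, default=0.0)
            profit = p * q - (a * q**3 + b * q**2 + c * q + F)
            return (q, profit) if profit >= -F else (0.0, -F)  # shut down otherwise

        # Example: C(q) = q^3 - 6q^2 + 15q + 10 at market price p = 15.
        print(competitive_supply(15.0, 1.0, -6.0, 15.0, 10.0))  # q = 4, profit = 22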

  15. Ehrenfest's Lottery--Time and Entropy Maximization

    ERIC Educational Resources Information Center

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
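
    A short simulation of such a lottery makes the point concrete: move one randomly chosen marble at a time between two urns, and the occupancy relaxes from any initial condition to fluctuations around the entropy-maximizing even split. The parameters below are arbitrary.

        # Sketch: the Ehrenfest urn model relaxing toward equilibrium.
        import random

        def ehrenfest(n_marbles=100, steps=20000, seed=1):
            rng = random.Random(seed)
            in_urn_a = n_marbles              # start with every marble in urn A
            history = []
            for _ in range(steps):
                if rng.random() < in_urn_a / n_marbles:
                    in_urn_a -= 1             # the chosen marble was in A: it leaves
                else:
                    in_urn_a += 1             # ... otherwise it enters A
                history.append(in_urn_a)
            return history

        h = ehrenfest()
        tail = h[5000:]                       # discard the relaxation transient
        print(sum(tail) / len(tail))          # ~50: half the marbles in each urn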

  16. Faculty Salaries and the Maximization of Prestige

    ERIC Educational Resources Information Center

    Melguizo, Tatiana; Strober, Myra H.

    2007-01-01

    Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…

  17. Maximizing the Phytonutrient Content of Potatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We are exploring to what extent the rich genetic diversity of potatoes can be used to maximize the nutritional potential of potatoes. Metabolic profiling is being used to screen potatoes for genotypes with elevated amounts of vitamins and phytonutrients. Substantial differences in phytonutrients am...

  18. Expectancies vs. Background in the Prediction of Adult Drinking Patterns.

    ERIC Educational Resources Information Center

    Brown, Sandra A.

    Alcoholism research has independently focused on background characteristics and alcohol-related expectations, e.g., social and physical pleasure, reduced tension, and increased assertiveness, as important variables in identifying high risk individuals. To assess the utility of alcohol reinforcement expectations as predictors of drinking patterns,…

  19. Educational Expectations and Attainment. NBER Working Paper No. 15683

    ERIC Educational Resources Information Center

    Jacob, Brian A.; Wilder, Tamara

    2010-01-01

    This paper examines the role of educational expectations in the educational attainment process. We utilize data from a variety of datasets to document and analyze the trends in educational expectations between the mid-1970s and the early 2000s. We focus on differences across racial/ethnic and socioeconomic groups and examine how young people…

  20. Teacher Expectancy Related to Student Performance in Vocational Education.

    ERIC Educational Resources Information Center

    Pandya, Himanshu S.

    A study was designed (1) to discover the effect of teacher expectation on student performance in the cognitive and in the psychomotor skills, and (2) to analyze students' attitudes toward teachers because of teacher expectations. The study utilized two different instructional units. The quality milk production unit was used to teach cognitive…

  1. The evolution of utility functions and psychological altruism.

    PubMed

    Clavien, Christine; Chapuisat, Michel

    2016-04-01

    Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the mathematical stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as own children, romantic partners and long-term reciprocators. PMID:26598465

  2. Intervening in Expectation Communication: The "Alterability" of Teacher Expectations.

    ERIC Educational Resources Information Center

    Cooper, Harris M.

    Theoretical and practical implications of the proposition that teachers' differential behavior toward high and low expectation students serves a control function were tested. As predicted, initial performance expectations were found related to later perceptions of control over performance, even when the initial relationship between expectations…

  3. Loops and multiple edges in modularity maximization of networks

    NASA Astrophysics Data System (ADS)

    Cafieri, Sonia; Hansen, Pierre; Liberti, Leo

    2010-04-01

    The modularity maximization model proposed by Newman and Girvan for the identification of communities in networks works for general graphs possibly with loops and multiple edges. However, the applications usually correspond to simple graphs. These graphs are compared to a null model where the degree distribution is maintained but edges are placed at random. Therefore, in this null model there will be loops and possibly multiple edges. Sharp bounds on the expected number of loops, and their impact on the modularity, are derived. Then, building upon the work of Massen and Doye, but using algebra rather than simulation, we propose modified null models associated with graphs without loops but with multiple edges, graphs with loops but without multiple edges and graphs without loops nor multiple edges. We validate our models by using the exact algorithm for clique partitioning of Grötschel and Wakabayashi.
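
    To see where loops enter, it helps to write the modularity down directly. The sketch below computes Newman-Girvan modularity from an adjacency matrix under the common convention that a self-loop contributes 2 to its diagonal entry and to the node's degree, so the null-model term k_i*k_j/(2m) also assigns weight to loops; the example graph is invented.

        # Sketch: Newman-Girvan modularity with self-loops counted explicitly.
        import numpy as np

        def modularity(A, communities):
            k = A.sum(axis=1)             # degrees (a loop adds 2 on the diagonal)
            two_m = A.sum()
            same = np.equal.outer(communities, communities)
            return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

        A = np.array([[2, 1, 0, 0],       # node 0 carries one self-loop (A_00 = 2)
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        print(modularity(A, np.array([0, 0, 1, 1])))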

  4. Price of anarchy is maximized at the percolation threshold.

    PubMed

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138

  5. The price of anarchy is maximized at the percolation threshold

    NASA Astrophysics Data System (ADS)

    Skinner, Brian

    2015-03-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called "price of anarchy" (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly-placed "congestible" and "incongestible" links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  6. Price of anarchy is maximized at the percolation threshold

    NASA Astrophysics Data System (ADS)

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  7. Labview utilities

    Energy Science and Technology Software Center (ESTSC)

    2011-09-30

    The software package provides several utilities written in LabView. These utilities don't form independent programs, but rather can be used as a library or as controls in other LabView programs. The utilities include several new controls (xcontrols), VIs for input and output routines, as well as other helper functions not provided in the standard LabView environment.

  8. Nondecoupling of maximal supergravity from the superstring.

    PubMed

    Green, Michael B; Ooguri, Hirosi; Schwarz, John H

    2007-07-27

    We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d) torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d≥4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d≥4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland. PMID:17678349

  9. Maximal CP violation in flavor neutrino masses

    NASA Astrophysics Data System (ADS)

    Kitabayashi, Teruyuki; Yasuè, Masaki

    2016-03-01

    Since flavor neutrino masses Mμμ,ττ,μτ can be expressed in terms of Mee,eμ,eτ, mutual dependence among Mμμ,ττ,μτ is derived by imposing some constraints on Mee,eμ,eτ. For appropriately imposed constraints on Mee,eμ,eτ giving rise to both maximal CP violation and the maximal atmospheric neutrino mixing, we show various specific textures of neutrino mass matrices, including the texture with Mττ = Mμμ* derived as the simplest solution to the constraint Mττ − Mμμ = imaginary, which is required by the constraint Meμ cos θ23 − Meτ sin θ23 = real for cos 2θ23 = 0. It is found that Majorana CP violation depends on the phase of Mee.

  10. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.

  11. Maximal temperature in a simple thermodynamical system

    NASA Astrophysics Data System (ADS)

    Dai, De-Chang; Stojkovic, Dejan

    2016-06-01

    Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. One can then calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. This represents the maximal achievable temperature in a simple thermodynamical system.
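
    The scales involved are easy to check numerically: the Planck temperature follows from ħ, c, G, and k_B, and the paper's critical temperature sits about a factor of 3 below it. A back-of-envelope sketch (reproducing only the order of magnitude, not the paper's derivation):

        # Back-of-envelope check of the temperature scales involved.
        from scipy.constants import hbar, c, G, k as k_B

        T_planck = (hbar * c**5 / G) ** 0.5 / k_B
        print(f"T_Planck   ~ {T_planck:.3e} K")     # ~1.417e32 K
        print(f"T_Planck/3 ~ {T_planck / 3:.3e} K") # rough scale of the critical T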

  12. Experimental implementation of maximally synchronizable networks

    NASA Astrophysics Data System (ADS)

    Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.

    2016-04-01

    Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. Tuning the weights of any given network so as to reach the lowest possible eigenratio λN/λ2 guarantees that the synchronized state is maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on synchronizability is proposed and explored both theoretically and experimentally.
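
    The figure of merit used above can be computed directly. A minimal sketch (not the authors' code), here for a small undirected graph, assuming numpy and networkx are available:

        # Compute the Laplacian eigenratio lambda_N / lambda_2, the
        # synchronizability measure minimized by an MSN (smaller is better).
        import numpy as np
        import networkx as nx

        G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
        L = nx.laplacian_matrix(G).toarray().astype(float)
        eigs = np.sort(np.linalg.eigvalsh(L))
        print(f"eigenratio lambda_N/lambda_2 = {eigs[-1] / eigs[1]:.3f}")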

  13. Basic principles of maximizing dental office productivity.

    PubMed

    Mamoun, John

    2012-01-01

    To maximize office productivity, dentists should focus on performing tasks that only they can perform and not spend office hours performing tasks that can be delegated to non-dentist personnel. An important element of maximizing productivity is to arrange the schedule so that multiple patients are seated simultaneously in different operatories. Doing so allows the dentist to work on one patient in one operatory without needing to wait for local anesthetic to take effect on another patient in another operatory, or for assistants to perform tasks (such as cleaning up, taking radiographs, performing prophylaxis, or transporting and preparing equipment and supplies) in other operatories. Another way to improve productivity is to structure procedures so that fewer steps are needed to set up and implement them. In addition, during procedures, four-handed dental passing methods can be used to provide the dentist with supplies or equipment when needed. This article reviews basic principles of maximizing dental office productivity, based on the author's observations of business logistics used by various dental offices. PMID:22414506

  14. Formation Control for the MAXIM Mission

    NASA Technical Reports Server (NTRS)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in spatial and angular resolution achievable by individual spacecraft have been reached with today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously limited by the largest monolithic structure that could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM) and the Stellar Imager, will drive the formation flying challenges to achieve unprecedented baselines for high-resolution, extended-scene interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility of formation control for the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) the high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. The linearized equations of motion provide the groundwork for linear formation control designs.
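
    The linearization step mentioned above follows the standard pattern of expanding about a reference trajectory (the paper's specific n-body equations are not reproduced here; this sketch shows only the generic form):

        % Relative motion delta-x about a reference x_0(t) in a potential U:
        % expanding x = x_0 + delta-x to first order gives a linear,
        % time-varying system suitable for linear control design.
        \[
        \ddot{x} = -\nabla U(x), \qquad x = x_0 + \delta x
        \;\Longrightarrow\;
        \delta\ddot{x} \;\approx\; -\,\nabla\nabla U\big(x_0(t)\big)\,\delta x .
        \]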

  15. Revenue maximization in survivable WDM networks

    NASA Astrophysics Data System (ADS)

    Sridharan, Murari; Somani, Arun K.

    2000-09-01

    Service availability is an indispensable requirement for many current and future applications over the Internet and hence has to be addressed as part of the optical QoS service model. Network service providers can offer varying classes of service based on the choice of protection employed, which can vary from full protection to no protection. Based on these service classes, traffic in the network falls into one of three classes: full protection, no protection, and best-effort. The network typically relies on best-effort traffic to maximize revenue. We consider two variations on the best-effort class: (1) all connections are accepted and the network tries to protect as many as possible, and (2) a mix of protected and unprotected connections where the goal is to maximize revenue. In this paper, we present a mathematical formulation that captures service differentiation based on lightpath protection, for revenue maximization in wavelength-routed backbone networks. Our approach also captures the service-disruption aspect of the problem, as there may be a penalty for disrupting currently working connections.
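
    As a toy abstraction of the revenue trade-off (not the paper's formulation, which is an ILP over lightpaths and wavelengths): on a single link, a protected request consumes a working and a backup wavelength while an unprotected one consumes a single wavelength, so choosing which best-effort requests to carry reduces to a small knapsack:

        # Hypothetical demand mix: (wavelengths needed, revenue) per request;
        # protected requests need 2 wavelengths, unprotected need 1.
        def max_revenue(requests, wavelengths):
            best = [0] * (wavelengths + 1)           # 0/1 knapsack by DP
            for need, revenue in requests:
                for cap in range(wavelengths, need - 1, -1):
                    best[cap] = max(best[cap], best[cap - need] + revenue)
            return best[wavelengths]

        requests = [(2, 5), (2, 5), (1, 3), (1, 2), (2, 4)]
        print(max_revenue(requests, wavelengths=4))  # -> 10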

  16. Maximal acceleration is non-rotating

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    1998-06-01

    In a stationary axisymmetric spacetime, the angular velocity of a stationary observer whose acceleration vector is Fermi-Walker transported is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer. The converse is also true if the spacetime is symmetric under reversing both t and φ together. Thus a congruence of non-rotating acceleration worldlines (NAW) is equivalent to a stationary congruence accelerating locally extremely (SCALE). These congruences are defined completely locally, unlike the case of zero angular momentum observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a stationary congruence accelerating maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulae for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry axis, and in a slowly rotating gravitational field, including the far-field limit, where the SCAM is shown to be counter-rotating relative to infinity. These formulae are evaluated in particular detail for the Kerr-Newman metric. Various other congruences are also defined, such as a stationary congruence rotating at minimum (SCRAM), and stationary worldlines accelerating radially maximally (SWARM), both of which coincide with a SCAM on an equatorial plane of reflection symmetry. Applications are also made to the gravitational fields of maximally rotating stars, the Sun and the Solar System.

  17. Sibling Status Effects: Adult Expectations.

    ERIC Educational Resources Information Center

    Baskett, Linda Musun

    1985-01-01

    This study attempted to determine what expectations or beliefs adults might hold about a child based on his or her sibling status alone. Ratings on 50 adjective pairs for each of three sibling status types, only, oldest, and youngest child, were assessed in relation to adult expectations, birth order, and parental status of rater. (Author/DST)

  18. Increasing Expectations for Student Effort.

    ERIC Educational Resources Information Center

    Schilling, Karen Maitland; Schilling, Karl L.

    1999-01-01

    States that few higher education institutions have publicly articulated clear expectations of the knowledge and skills students are to attain. Describes gap between student and faculty expectations for academic effort. Reports that what is required in students' first semester appears to play a strong role in shaping the time investments made in…

  19. Student Expectations of Grade Inflation.

    ERIC Educational Resources Information Center

    Landrum, R. Eric

    1999-01-01

    College students completed evaluation-of-teaching surveys in five different courses to develop an evaluation instrument that would provide results concerning faculty performance. Two questions examined students' expectations regarding grades. Results indicated a significant degree of expected grade inflation. Large proportions of students doing…

  20. Institutional Differences: Expectations and Perceptions.

    ERIC Educational Resources Information Center

    Silver, Harold

    1982-01-01

    The history of higher education has paid scant attention to the attitudes and expectations of its customers: students and employers of graduates. Recent research on student and employer attitudes toward higher education sectors has not taken these expectations into account in the context of recent higher education history. (Author/MSE)

  1. Expectations of Garland [Junior College].

    ERIC Educational Resources Information Center

    Garland Junior Coll., Boston, MA.

    A survey was conducted at Garland Junior College to determine the educational expectations of 69 new students, 122 parents, and 22 college faculty and administrators. Each group in this private women's college was asked to rank, in terms of expectations they held, the following items: learn job skills, mature in relations with others, become more…

  2. Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement

    SciTech Connect

    Lee, Seung-Woo; Jaksch, Dieter

    2009-07-15

    We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.

  3. Uplink Array Calibration via Far-Field Power Maximization

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Mukai, R.; Lee, D.

    2006-01-01

    Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
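
    A toy numerical sketch of the power-maximization idea (hypothetical phase offsets and a simple per-antenna phase scan; the article's Moon-bounce measurement chain is not modeled):

        # Coordinate-ascent phasing: scan each antenna's commanded phase while
        # the others are held fixed, keeping the setting that maximizes the
        # combined far-field power. true_offsets stand in for uncalibrated delays.
        import numpy as np

        rng = np.random.default_rng(0)
        n_ant = 4
        true_offsets = rng.uniform(0, 2 * np.pi, n_ant)  # unknown to the calibrator

        def far_field_power(cmd):
            return abs(np.exp(1j * (cmd + true_offsets)).sum()) ** 2

        cmd = np.zeros(n_ant)
        scan = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        for _ in range(3):                       # a few sweeps suffice here
            for i in range(1, n_ant):            # antenna 0 is the phase reference
                powers = [far_field_power(np.where(np.arange(n_ant) == i, phi, cmd))
                          for phi in scan]
                cmd[i] = scan[int(np.argmax(powers))]

        print(f"calibrated power: {far_field_power(cmd):.2f} of ideal {n_ant**2}")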

  4. Utility Static Generation Reliability

    Energy Science and Technology Software Center (ESTSC)

    1993-03-05

    PICES (Probabilistic Investigation of Capacity and Energy Shortages) was developed for estimating an electric utility's expected frequency and duration of capacity deficiencies on a daily on- and off-peak basis. In addition to the system loss-of-load probability (LOLP) and loss-of-load expectation (LOLE) indices, PICES calculates the expected frequency and duration of system capacity deficiencies and the probability, expectation, and expected frequency and duration of a range of system reserve margin states. Results are aggregated and printed on a weekly, monthly, or annual basis. The program employs hourly load data and either a two-state (on/off) or a more sophisticated three-state (on/partially on/fully off) generating unit representation. Unit maintenance schedules are determined on a weekly, levelized reserve margin basis. In addition to the 8760-hour annual load record, the user provides the following information for each unit: plant capacity, annual maintenance requirement, two- or three-state unit failure and repair rates, and, for three-state models, the partial-state capacity deficiency. PICES can also supply default failure and repair rate values, based on the Edison Electric Institute's 1979 Report on Equipment Availability for the Ten-Year Period 1968 Through 1977, for many common plant types. Multi-year analysis can be performed by specifying as input data the annual peak load growth rates and plant addition and retirement schedules for each year in the study.
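
    The core two-state convolution behind such indices can be sketched in a few lines of Python (the unit data here is hypothetical, and PICES's three-state units, maintenance scheduling, and frequency/duration calculations are not shown):

        # Build the capacity-outage distribution for two-state units, then
        # LOLP = P(available capacity < load) and LOLE = sum of daily LOLPs.
        from collections import defaultdict

        units = [(200, 0.05), (150, 0.04), (100, 0.08)]  # (MW, forced outage rate)

        dist = {0: 1.0}
        for cap, q in units:
            nxt = defaultdict(float)
            for avail, p in dist.items():
                nxt[avail + cap] += p * (1 - q)   # unit available
                nxt[avail] += p * q               # unit on forced outage
            dist = dict(nxt)

        def lolp(load_mw):
            return sum(p for avail, p in dist.items() if avail < load_mw)

        daily_peaks = [280, 300, 320, 350]        # toy daily peak loads, MW
        print(f"LOLE = {sum(map(lolp, daily_peaks)):.4f} days")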

  5. Physical activity extends life expectancy

    Cancer.gov

    Leisure-time physical activity is associated with longer life expectancy, even at relatively low levels of activity and regardless of body weight, according to a study by a team of researchers led by the NCI.

  6. Dialysis centers - what to expect

    MedlinePlus

    ... what to expect; Renal replacement therapy - dialysis centers; End-stage renal disease - dialysis centers; Kidney failure - dialysis ...

  7. Maternal Competence, Expectation, and Involvement

    ERIC Educational Resources Information Center

    Heath, Douglas H.

    1977-01-01

    Presents a study of maternal competence, expectations and involvement in child rearing decisions in relation to paternal personality and marital characteristics. Subjects were 45 thirty-year-old mothers. (BD)

  8. Maximizing versus satisficing: happiness is a matter of choice.

    PubMed

    Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R

    2002-11-01

    Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame. PMID:12416921

  9. Coloring random graphs and maximizing local diversity.

    PubMed

    Bounkong, S; van Mourik, J; Saad, D

    2006-11-01

    We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. PMID:17280022
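
    A minimal greedy sketch of the objective (not the paper's belief-propagation or Walksat solvers, which are adapted message-passing and stochastic local search algorithms):

        # Greedy local search: repeatedly recolor each vertex with the color
        # that maximizes the total count, over all vertices, of distinct
        # colors appearing among their neighbors.
        import random
        import networkx as nx

        def local_diversity(G, colors):
            return sum(len({colors[u] for u in G[v]}) for v in G)

        def greedy_diversify(G, n_colors=3, sweeps=20, seed=0):
            rng = random.Random(seed)
            colors = {v: rng.randrange(n_colors) for v in G}
            for _ in range(sweeps):
                for v in G.nodes:
                    colors[v] = max(range(n_colors),
                                    key=lambda c: local_diversity(G, {**colors, v: c}))
            return colors

        G = nx.random_regular_graph(4, 30, seed=2)
        print("local diversity objective:", local_diversity(G, greedy_diversify(G)))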

  10. Using molecular biology to maximize concurrent training.

    PubMed

    Baar, Keith

    2014-11-01

    Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This review specifically defines the concurrent training effect, discusses the potential molecular mechanisms underlying this effect, and proposes strategies to maximize strength and endurance in the high-level athlete. PMID:25355186