Science.gov

Sample records for expected utility maximization

  1. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    SciTech Connect

    Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.

    2013-02-15

We consider the problem of maximization of expected terminal power utility (risk-sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite-state Markov process. The main novelty is that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process whose intensity is driven by the unobserved Markovian factor process as well. This leads to more realistic modeling of many practical situations, such as markets with liquidity restrictions; on the other hand, it considerably complicates the problem, to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to power utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  2. Classical subjective expected utility

    PubMed Central

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-01-01

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty. PMID:23559375

  3. Why Contextual Preference Reversals Maximize Expected Value

    PubMed Central

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391

  4. Fractional stereo matching using expectation-maximization.

    PubMed

    Xiong, Wei; Chung, Hin Shun; Jia, Jiaya

    2009-03-01

In our fractional stereo matching problem, a foreground object with a fractional boundary is blended with a background scene using unknown transparencies. Due to the spatially varying disparities in different layers, one foreground pixel may be blended with different background pixels in the stereo images, so the color constancy commonly assumed in traditional stereo matching no longer holds. To tackle this problem, we introduce a probabilistic framework constraining the matching of pixel colors, disparities, and alpha values in different layers, and propose an automatic optimization method that solves a maximum a posteriori (MAP) problem using expectation-maximization (EM), given only a short-baseline stereo input image pair. Our method encodes the effect of background occlusion by layer blending without requiring a special detection process. The alpha computation in our unified framework can be regarded as a new approach to natural image matting, one that appropriately handles the situation in which the background color is similar to that of the foreground object. We demonstrate the efficacy of our method by experimenting with challenging stereo images and making comparisons with state-of-the-art methods.

  5. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
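The iterative scheme described above can be sketched in its generic form: the classical maximum-likelihood EM update for Poisson count data (the Richardson-Lucy iteration), which keeps the solution positive by construction. The matrix, data, and iteration count below are illustrative assumptions, not the RHESSI-specific implementation or its stopping rule.

```python
# Minimal sketch of the ML-EM (Richardson-Lucy) iteration for Poisson data
# y ~ A @ x with a built-in positivity constraint. A, y, and n_iter are
# illustrative; the paper's regularizing stopping rule is not reproduced.

def ml_em(A, y, n_iter=200, eps=1e-12):
    """Iterate x <- x * (A^T (y / A x)) / (A^T 1); x stays nonnegative."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                   # positive initial guess
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / max(Ax[i], eps) for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / max(col_sum[j], eps) for j in range(n)]
    return x

# Toy 2x2 problem with exact data: the iteration recovers x_true.
A = [[0.8, 0.2], [0.3, 0.7]]
x_true = [2.0, 5.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x_hat = ml_em(A, y)
```

Each multiplicative update preserves nonnegativity, which is why the method suits count data; in practice (as the abstract stresses) the iteration must be stopped early to regularize noisy data.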

  6. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  7. Inexact Matching of Ontology Graphs Using Expectation-Maximization

    PubMed Central

    Doshi, Prashant; Kolli, Ravikanth; Thomas, Christopher

    2009-01-01

We present a new method for mapping ontology schemas that address similar domains. The problem of ontology matching is crucial, since we are witnessing a decentralized development and publication of ontological data. We formulate the problem of inferring a match between two ontologies as a maximum likelihood problem, and solve it using the technique of expectation-maximization (EM). Specifically, we adopt directed graphs as our model for ontology schemas and use a generalized version of EM to arrive at a map between the nodes of the graphs. We exploit the structural, lexical, and instance similarity between the graphs, and differ from previous approaches in the way we utilize them to arrive at a possibly inexact match. Inexact matching is the process of finding the best possible match between two graphs when exact matching is not possible or is computationally difficult. In order to scale the method to large ontologies, we identify the computational bottlenecks and adapt the generalized EM by using a memory-bounded partitioning scheme. We provide comparative experimental results in support of our method on two well-known ontology alignment benchmarks and discuss their implications. PMID:20160892

  8. The Experienced Utility of Expected Utility Approaches

    DTIC Science & Technology

    1980-04-01

    than one using unit weights. Similarly, goods obtained with high probability should be valued more than those obtained with low probability. Therefore...consequences are added, low or negative utility associated with one conse- quence can, in principle, be compensated for by sufficiently high utility on...calibrated probability assessor should have more true statements associated with high than with low probabilities. Specifically, XX% of the statements

  9. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions, thus, dynamic updating of utility information is minimized.
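The branch-point rule described above can be sketched simply: each eligible option carries a utility distribution mapping possible start times to the expected utility of its plan suffix, and execution picks the option whose distribution is highest at the current time. The option names, time windows, and utilities below are invented for illustration.

```python
# Sketch of branch selection from cached utility distributions. Each
# distribution is piecewise-constant over start-time windows; names and
# numbers are hypothetical, not from the NASA planner.

def best_option(utility_dists, t):
    """utility_dists: {name: [(start, end, utility), ...]} piecewise-constant
    distributions; return the option with the highest utility at time t."""
    best, best_u = None, float("-inf")
    for name, pieces in utility_dists.items():
        for start, end, u in pieces:
            if start <= t < end and u > best_u:
                best, best_u = name, u
    return best

dists = {
    "image_rock": [(0, 10, 5.0), (10, 20, 2.0)],   # loses value if started late
    "drive_north": [(0, 20, 3.0)],                 # time-insensitive option
}
```

Because the whole distribution is cached, the same lookup stays valid as the execution window shifts, which is the efficiency point the abstract makes.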

  10. Expectation-Maximization Binary Clustering for Behavioural Annotation

    PubMed Central

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
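The underlying machinery EMbC builds on, maximum-likelihood estimation of a Gaussian mixture by EM, can be sketched in one dimension. The synthetic data, deterministic initialization, and iteration count below are illustrative assumptions, not the EMbC algorithm itself (which adds its binary delimitation and smoothing).

```python
# Minimal sketch of EM for a two-component 1-D Gaussian mixture, the kind
# of model the abstract's clustering is based on. Data are synthetic.
import math, random

def em_gmm_1d(xs, n_iter=50):
    """EM for a 2-component Gaussian mixture; returns (weights, means, vars)."""
    mu = [min(xs), max(xs)]            # crude but deterministic initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities r[i][k] proportional to w_k N(x_i|mu_k,var_k)
        r = []
        for x in xs:
            pk = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) /
                  math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(pk)
            r.append([p / s for p in pk])
        # M-step: closed-form re-estimation from the responsibilities
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(150)] + \
     [random.gauss(5.0, 1.0) for _ in range(150)]
w, mu, var = em_gmm_1d(xs)
```

The closed-form M-step is what keeps the cost per iteration low, the property the abstract highlights.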

  11. Robust Utility Maximization Under Convex Portfolio Constraints

    SciTech Connect

Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-04-15

We study a robust utility maximization problem for terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the optimal consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  12. Matching Pupils and Teachers to Maximize Expected Outcomes.

    ERIC Educational Resources Information Center

    Ward, Joe H., Jr.; And Others

    To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…

  13. A multiscale expectation-maximization semisupervised classifier suitable for badly posed image classification.

    PubMed

    Baraldi, Andrea; Bruzzone, Lorenzo; Blonda, Palma

    2006-08-01

This paper deals with the problem of badly posed image classification. Although underestimated in practice, bad-posedness is likely to affect many real-world image classification tasks, where reference samples are difficult to collect (e.g., in remote sensing (RS) image mapping) and/or spatial autocorrelation is relevant. In an image classification context affected by a lack of reference samples, an original inductive learning multiscale image classifier, termed multiscale semisupervised expectation maximization (MSEM), is proposed. The rationale behind MSEM is to combine useful complementary properties of two alternative data mapping procedures recently published outside of the image processing literature, namely, the multiscale modified Pappas adaptive clustering (MPAC) algorithm and the sample-based semisupervised expectation maximization (SEM) classifier. To demonstrate its potential utility, MSEM is compared against nonstandard classifiers, such as MPAC, SEM and the single-scale contextual SEM (CSEM) classifier, besides against well-known standard classifiers in two RS image classification problems featuring few reference samples and modestly useful texture information. These experiments yield weak (subjective) but numerous quantitative map quality indexes that are consistent with both theoretical considerations and qualitative evaluations by expert photointerpreters. According to these quantitative results, MSEM is competitive in terms of overall image mapping performance at the cost of a computational overhead three to six times that of its most interesting rival, SEM. More generally, our experiments confirm that, even if they rely on heavy class-conditional normal distribution assumptions that may not be true in many real-world problems (e.g., in highly textured images), semisupervised classifiers based on the iterative expectation maximization Gaussian mixture model solution can be very powerful in practice when: 1) there is a lack of reference

  14. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
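The uncoded special case of the allocation problem above has a simple greedy solution: with divisible messages, the expected decodable value is the sum over links of (probability the link works) times (value placed on it), so the highest value-density data should ride the most reliable links. This is a sketch of that special case only, not of the paper's coding or linear-programming formulation; the link and message numbers are invented.

```python
# Greedy allocation for the uncoded case: expected value decomposes per
# link, so sort messages by value density and links by reliability.
# Numbers below are illustrative.

def greedy_allocate(links, messages):
    """links: list of (p_work, capacity); messages: list of (size, value).
    Returns the expected decodable value of the greedy uncoded allocation."""
    msgs = sorted(messages, key=lambda m: m[1] / m[0], reverse=True)
    lnks = sorted(links, key=lambda l: l[0], reverse=True)
    expected = 0.0
    i = 0                                   # current (partially sent) message
    remaining = msgs[0][0] if msgs else 0.0
    for p, cap in lnks:
        while cap > 0 and i < len(msgs):
            size, value = msgs[i]
            piece = min(cap, remaining)
            expected += p * value * piece / size   # piece value ~ its size
            cap -= piece
            remaining -= piece
            if remaining == 0:
                i += 1
                remaining = msgs[i][0] if i < len(msgs) else 0.0
    return expected

links = [(0.9, 10), (0.5, 10)]       # (probability working, capacity)
messages = [(10, 100), (10, 40)]     # (size, value); value proportional to size
# high-value message rides the reliable link: 0.9*100 + 0.5*40 = 110.0
```

The interesting cases in the abstract arise precisely where this greedy decomposition breaks down: with three or more links, coding across links can beat any uncoded assignment.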

  15. AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization

    NASA Astrophysics Data System (ADS)

    Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui

    High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (E-M) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem

16. The Japanese utilities' expectations for subchannel analysis

    SciTech Connect

    Toba, Akio; Omoto, Akira

    1995-12-01

    Boiling water reactor (BWR) utilities in Japan began to consider the development of a mechanistic model to describe the critical heat transfer conditions in the BWR fuel subchannel. Such a mechanistic model will not only decrease the necessity of tests, but will also help by removing some overly conservative safety margins in thermal hydraulics. With the use of a postdryout heat transfer correlation, new acceptance criteria may be applicable to evaluate the fuel integrity. Mechanistic subchannel analysis models will certainly back up this approach. This model will also be applicable to the analysis of large-size fuel bundles and examination of corrosion behavior.

  17. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that describes the weight gain resulting from this energy gap. We construct an objective function that considers food consumption, eating habits, and survival rate to measure utility. Applying tools from optimal control methods and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the values of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with damped oscillations.
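The state equation described above can be sketched as a simple energy-balance integration: weight changes in proportion to the gap between caloric intake and expenditure. The 7700 kcal-per-kg conversion factor is a common rough approximation, and the intake and expenditure numbers are illustrative assumptions, not the paper's calibration.

```python
# Euler integration of a toy energy-balance state equation:
#   dW/dt = (intake - expenditure(W)) / KCAL_PER_KG
# All parameter values are hypothetical placeholders.

KCAL_PER_KG = 7700.0   # rough energy content of 1 kg of body tissue (assumption)

def simulate_weight(w0, intake, expenditure_per_kg, days):
    """Integrate daily weight change from the intake/expenditure gap."""
    w = w0
    for _ in range(days):
        gap = intake - expenditure_per_kg * w    # kcal/day energy imbalance
        w += gap / KCAL_PER_KG                   # convert energy gap to kg
    return w

# The weight settles where intake equals expenditure:
# W* = intake / expenditure_per_kg = 2500 / 32 = 78.125 kg here.
w_final = simulate_weight(70.0, 2500.0, 32.0, days=5000)
```

With expenditure proportional to weight, this toy dynamic has a single stable steady state; the paper's richer model, with utility-maximizing consumption, is what produces the saddle-point and focus cases.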

  18. Treatments of Missing Data: A Monte Carlo Comparison of RBHDI, Iterative Stochastic Regression Imputation, and Expectation-Maximization.

    ERIC Educational Resources Information Center

    Gold, Michael Steven; Bentler, Peter M.

    2000-01-01

    Describes a Monte Carlo investigation of four methods for treating incomplete data: (1) resemblance based hot-deck imputation (RBHDI); (2) iterated stochastic regression imputation; (3) structured model expectation maximization; and (4) saturated model expectation maximization. Results favored the expectation maximization methods. (SLD)

  19. A classification of bioinformatics algorithms from the viewpoint of maximizing expected accuracy (MEA).

    PubMed

    Hamada, Michiaki; Asai, Kiyoshi

    2012-05-01

    Many estimation problems in bioinformatics are formulated as point estimation problems in a high-dimensional discrete space. In general, it is difficult to design reliable estimators for this type of problem, because the number of possible solutions is immense, which leads to an extremely low probability for every solution-even for the one with the highest probability. Therefore, maximum score and maximum likelihood estimators do not work well in this situation although they are widely employed in a number of applications. Maximizing expected accuracy (MEA) estimation, in which accuracy measures of the target problem and the entire distribution of solutions are considered, is a more successful approach. In this review, we provide an extensive discussion of algorithms and software based on MEA. We describe how a number of algorithms used in previous studies can be classified from the viewpoint of MEA. We believe that this review will be useful not only for users wishing to utilize software to solve the estimation problems appearing in this article, but also for developers wishing to design algorithms on the basis of MEA.
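The MAP-versus-MEA distinction the review turns on can be shown with a toy example: over a distribution on two binary positions, the jointly most probable outcome can differ from the estimate that maximizes the expected number of correct positions. The probabilities below are made up purely for illustration.

```python
# Toy contrast between the MAP estimate and the MEA estimate under a
# positionwise accuracy measure. The distribution is invented.

dist = {(1, 1): 0.4, (0, 0): 0.3, (0, 1): 0.3}   # P over two binary sites

def map_estimate(dist):
    """Jointly most probable outcome (maximum likelihood / MAP)."""
    return max(dist, key=dist.get)

def mea_estimate(dist):
    """Expected positionwise accuracy decomposes per site, so the MEA
    estimate takes the majority value of each marginal independently."""
    est = []
    for pos in range(2):
        p1 = sum(p for outcome, p in dist.items() if outcome[pos] == 1)
        est.append(1 if p1 > 0.5 else 0)
    return tuple(est)

def expected_accuracy(est, dist):
    """Expected number of correctly estimated positions under dist."""
    return sum(p * sum(e == o for e, o in zip(est, outcome))
               for outcome, p in dist.items())
```

Here the MAP estimate is (1, 1) with probability 0.4, yet (0, 1) gets more positions right on average, exactly the effect that makes MEA estimators attractive in huge discrete solution spaces.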

  20. 76 FR 51060 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... Commission has disseminated elsewhere. Archival records, including library holdings. Archival information... Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information AGENCY... guidelines to ensure and maximize the quality, objectivity, utility, and integrity of...

  1. Role of UCG in maximizing coal utilization: site specific study

    SciTech Connect

    Linn, J. K.; Love, S. L.

    1980-01-01

    The Department of Energy is sponsoring a project to develop a planning scheme for improving the utilization of coal deposits. This prototype study, called Total Economic Coal Utilization (TECU), is being applied to specific coal reserves within the Centralia-Chehalis District of Washington State. A significant aspect of the study is to determine the potential role for in situ gasification in maximizing the energy recovery and use. The results obtained indicate that UCG could be used to realize a sizeable increase in the amount of energy that can be economically recovered from the District. Since UCG technology has not reached the commercialization stage, some significant assumptions had to be made for this study. These are that the in situ process will work reliably and that product gas cleanup will proceed without major problems. However, if these conditions are met, this assessment indicates that in situ coal gasification could increase the extractable energy from Washington's Centralia-Chehalis coal deposits by a substantial amount and that this additional energy could be accessed at reasonable cost.

  2. A compact formulation for maximizing the expected number of transplants in kidney exchange programs

    NASA Astrophysics Data System (ADS)

    Alvelos, Filipe; Klimentova, Xenia; Rais, Abdur; Viana, Ana

    2015-05-01

    Kidney exchange programs (KEPs) allow the exchange of kidneys between incompatible donor-recipient pairs. Optimization approaches can help KEPs in defining which transplants should be made among all incompatible pairs according to some objective. The most common objective is to maximize the number of transplants. In this paper, we propose an integer programming model which addresses the objective of maximizing the expected number of transplants, given that there are equal probabilities of failure associated with vertices and arcs. The model is compact, i.e. has a polynomial number of decision variables and constraints, and therefore can be solved directly by a general purpose integer programming solver (e.g. Cplex).
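The objective above can be made concrete with a brute-force toy version: if every vertex and arc succeeds independently with probability p, a k-way exchange cycle executes only when all its k vertices and k arcs succeed, contributing k * p^(2k) expected transplants. The candidate cycles and p below are invented; the paper's compact integer program is what makes realistic instances tractable.

```python
# Brute-force selection of vertex-disjoint exchange cycles maximizing the
# expected number of transplants, on a toy instance. Illustrative only.
from itertools import combinations

def expected_transplants(cycles, p):
    """A k-cycle yields k transplants iff its k vertices and k arcs all
    succeed, each independently with probability p."""
    return sum(len(c) * p ** (2 * len(c)) for c in cycles)

def best_disjoint(cycles, p):
    """Enumerate all vertex-disjoint subsets of candidate cycles."""
    best, best_val = [], 0.0
    for r in range(1, len(cycles) + 1):
        for subset in combinations(cycles, r):
            used = [v for c in subset for v in c]
            if len(used) == len(set(used)):        # vertex-disjoint check
                val = expected_transplants(subset, p)
                if val > best_val:
                    best, best_val = list(subset), val
    return best, best_val

# candidate exchange cycles among incompatible pairs 1..4
cycles = [(1, 2), (2, 3), (3, 4), (1, 2, 3)]
sel, val = best_disjoint(cycles, 0.9)
```

Note the trade-off the model captures: a 3-cycle promises more transplants (3) but is likelier to fail entirely (p^6 vs. p^4), so here two disjoint 2-cycles win.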

  3. Adolescent Beer Drinking: Subjective Expected Utility and Gender Differences.

    ERIC Educational Resources Information Center

    Bauman, Karl E.; Bryan, Elizabeth S.

    1983-01-01

Found support for the hypothesis that subjective expected utility (the degree to which the expected consequences of a behavior, considered together by the individual, are perceived as more positive or more negative) accounts for the fact that male adolescents drink more beer than females. (CMG)

  4. Disconfirmation of Expectations of Utility in e-Learning

    ERIC Educational Resources Information Center

    Cacao, Rosario

    2013-01-01

    Using pre-training and post-training paired surveys in e-learning based training courses, we have compared the "expectations of utility," measured at the beginning of an e-learning course, with the "perceptions of utility," measured at the end of the course, and related it with the trainees' motivation. We have concluded that…

  5. The expectation maximization algorithm applied to the search of point sources of astroparticles

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan Antonio; Hernández-Rey, Juan José

    2008-03-01

The expectation-maximization algorithm, widely employed in cluster and pattern recognition analysis, is proposed in this article for the search for point sources of astroparticles. We show how to adapt the method to the particular case in which a faint source signal over a large background is expected. In particular, the method is applied to the point-source search in neutrino telescopes. A generic neutrino telescope with an area of 1 km2 located in the Mediterranean Sea has been simulated. Results are given in terms of the minimum detectable number of events, and the method compares favorably with a classical binned method.

  6. Expectation-Maximization Method for EEG-Based Continuous Cursor Control

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoyuan; Guan, Cuntai; Wu, Jiankang; Cheng, Yimin; Wang, Yixiao

    2006-12-01

    To develop effective learning algorithms for continuous prediction of cursor movement using EEG signals is a challenging research issue in brain-computer interface (BCI). In this paper, we propose a novel statistical approach based on expectation-maximization (EM) method to learn the parameters of a classifier for EEG-based cursor control. To train a classifier for continuous prediction, trials in training data-set are first divided into segments. The difficulty is that the actual intention (label) at each time interval (segment) is unknown. To handle the uncertainty of the segment label, we treat the unknown labels as the hidden variables in the lower bound on the log posterior and maximize this lower bound via an EM-like algorithm. Experimental results have shown that the averaged accuracy of the proposed method is among the best.

  7. Acceleration of Expectation-Maximization algorithm for length-biased right-censored data.

    PubMed

    Chan, Kwun Chuen Gary

    2017-01-01

    Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator of length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on iterative convex minorant and Aitken's delta squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of number of iterations and actual timing. The acceleration method based on a modification of Aitken's delta squared performed the best under a variety of settings.
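Aitken's delta-squared process, one of the two accelerators studied above, can be sketched generically: from three successive iterates of a linearly convergent sequence it extrapolates toward the limit. The fixed-point map below (cosine iteration) merely stands in for an EM update; it is not Vardi's algorithm.

```python
# Sketch of Aitken's delta-squared extrapolation on a linearly convergent
# fixed-point iteration. The cosine map is an illustrative stand-in.
import math

def aitken(x0, x1, x2):
    """Aitken extrapolation: x0 - (x1 - x0)^2 / ((x2 - x1) - (x1 - x0))."""
    denom = (x2 - x1) - (x1 - x0)
    if denom == 0:
        return x2          # sequence already (numerically) converged
    return x0 - (x1 - x0) ** 2 / denom

# x <- cos(x) converges linearly to the fixed point ~0.7390851...
x = 1.0
seq = [x]
for _ in range(6):
    x = math.cos(x)
    seq.append(x)

plain = seq[-1]                              # plain 6th iterate
accel = aitken(seq[-3], seq[-2], seq[-1])    # accelerated from last three
limit = 0.7390851332151607                   # known fixed point of cos
```

After only six iterations the extrapolated value is markedly closer to the limit than the plain iterate, the same effect the paper exploits to cut EM iteration counts on heavily censored data.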

  8. Maximum-entropy expectation-maximization algorithm for image reconstruction and sensor field estimation.

    PubMed

    Hong, Hunsop; Schonfeld, Dan

    2008-06-01

In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determining the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We therefore derive the MEEM algorithm by optimizing a lower bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has previously been employed for 2-D density estimation; we extend its use, and that of our approach, to image recovery from randomly sampled data and to sensor field estimation from randomly scattered sensor networks. Computer simulation experiments demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.

  9. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence.

  10. Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures

    SciTech Connect

    Melis, Anastasios

    2014-12-31

    The project addressed the following technical barrier from the Biological Hydrogen Production section of the Fuel Cell Technologies Program Multi-Year Research, Development and Demonstration Plan: Low Sunlight Utilization Efficiency in Photobiological Hydrogen Production is due to a Large Photosystem Chlorophyll Antenna Size in Photosynthetic Microorganisms (Barrier AN: Light Utilization Efficiency).

  11. A joint shape evolution approach to medical image segmentation using expectation-maximization algorithm.

    PubMed

    Farzinfar, Mahshid; Teoh, Eam Khwang; Xue, Zhong

    2011-11-01

    This study proposes an expectation-maximization (EM)-based curve evolution algorithm for segmentation of magnetic resonance brain images. In the proposed algorithm, the evolution curve is constrained not only by a shape-based statistical model but also by a hidden variable model from image observation. The hidden variable model herein is defined by the local voxel labeling, which is unknown and estimated by the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray nuclei structures such as caudate, putamens and thalamus of three-dimensional magnetic resonance imaging in volunteers and patients. As for the robustness and accuracy of the segmentation algorithm, the results of the proposed EM-joint shape-based algorithm outperformed those obtained using the statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method.

  12. Implementation and evaluation of an expectation maximization reconstruction algorithm for gamma emission breast tomosynthesis

    PubMed Central

    Gong, Zongyi; Klanian, Kelly; Patel, Tushita; Sullivan, Olivia; Williams, Mark B.

    2012-01-01

    Purpose: We are developing a dual modality tomosynthesis breast scanner in which x-ray transmission tomosynthesis and gamma emission tomosynthesis are performed sequentially with the breast in a common configuration. In both modalities projection data are obtained over an angular range of less than 180° from one side of the mildly compressed breast resulting in incomplete and asymmetrical sampling. The objective of this work is to implement and evaluate a maximum likelihood expectation maximization (MLEM) reconstruction algorithm for gamma emission breast tomosynthesis (GEBT). Methods: A combination of Monte Carlo simulations and phantom experiments was used to test the MLEM algorithm for GEBT. The algorithm utilizes prior information obtained from the x-ray breast tomosynthesis scan to partially compensate for the incomplete angular sampling and to perform attenuation correction (AC) and resolution recovery (RR). System spatial resolution, image artifacts, lesion contrast, and signal to noise ratio (SNR) were measured as image quality figures of merit. To test the robustness of the reconstruction algorithm and to assess the relative impacts of correction techniques with changing angular range, simulations and experiments were both performed using acquisition angular ranges of 45°, 90° and 135°. For comparison, a single projection containing the same total number of counts as the full GEBT scan was also obtained to simulate planar breast scintigraphy. Results: The in-plane spatial resolution of the reconstructed GEBT images is independent of source position within the reconstructed volume and independent of acquisition angular range. For 45° acquisitions, spatial resolution in the depth dimension (the direction of breast compression) is degraded with increasing source depth (increasing distance from the collimator surface). Increasing the acquisition angular range from 45° to 135° both greatly reduces this depth dependence and improves the average depth

  13. Teams ranking of Malaysia Super League using Bayesian expectation maximization for Generalized Bradley Terry Model

    NASA Astrophysics Data System (ADS)

    Nor, Shahdiba Binti Md; Mahmud, Zamalia

    2016-10-01

    The analysis of sports data has always aroused great interest among statisticians, and such data have been investigated from different perspectives, often with the aim of forecasting results. This study focuses on the 12 teams that joined the Malaysian Super League (MSL) for the 2015 season. This paper uses Bayesian expectation maximization for the generalized Bradley-Terry model to estimate the rankings of all the football teams. Under the generalized Bradley-Terry model, the maximum likelihood (ML) estimate of the skill ratings λ can be found with a simple iterative procedure. To maximize the likelihood, we use Bayesian inference to obtain a posterior distribution that can be computed quickly. Each team's ability was estimated from the previous year's game results by calculating the probability of winning based on the final scores for each team. It was found that including tie scores does make a difference in estimating a football team's ability to win the next match, and that a team with better results in the previous year has a better chance of scoring in the next game.
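
    The "simple iterative procedure" for the ML skill ratings mentioned above is the classical Bradley-Terry fixed-point iteration. A minimal sketch (the function name and toy data are ours; ties and the Bayesian EM extension are not modeled):

```python
import numpy as np

def bradley_terry_ml(wins, n_iter=200):
    """Fixed-point ML iteration for Bradley-Terry skill ratings.

    wins[i, j] = number of times team i beat team j.  Each update is
    lambda_i <- W_i / sum_{j != i} n_ij / (lambda_i + lambda_j),
    where W_i is team i's total wins and n_ij the games between i and j.
    """
    games = wins + wins.T            # games played between each pair
    w_tot = wins.sum(axis=1)         # total wins per team
    lam = np.ones(wins.shape[0])     # start from equal ratings
    for _ in range(n_iter):
        denom = games / (lam[:, None] + lam[None, :])
        np.fill_diagonal(denom, 0.0)
        lam = w_tot / denom.sum(axis=1)
        lam /= lam.sum()             # ratings are scale-free; fix the sum
    return lam
```

    The fitted model predicts that team i beats team j with probability lam[i] / (lam[i] + lam[j]).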

  14. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    SciTech Connect

    Bailey, T.L.; Elkan, C.

    1994-12-31

    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.

  15. Expectation maximization and the retrieval of the atmospheric extinction coefficients by inversion of Raman lidar data.

    PubMed

    Garbarino, Sara; Sorrentino, Alberto; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele

    2016-09-19

    We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.
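
    For a discretized linear forward model, the positivity-preserving EM iteration with early stopping described above takes the familiar ML-EM multiplicative form. A minimal sketch under assumed Poisson statistics (the `mlem` name and a fixed iteration count are ours and stand in for the paper's statistically motivated stopping rule):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM multiplicative iteration for y ≈ A @ x with Poisson noise.

    Starting from a flat positive estimate, every update keeps x >= 0,
    and stopping the iterations early acts as regularization.
    """
    x = np.full(A.shape[1], y.sum() / A.sum())  # flat positive start
    sens = A.sum(axis=0)                        # column sums (sensitivity)
    for _ in range(n_iter):
        yhat = A @ x                            # current model prediction
        x *= (A.T @ (y / yhat)) / sens          # multiplicative EM update
    return x
```

    In practice the iteration count plays the role of the regularization parameter, which is what a stopping criterion like the one proposed in the paper selects.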

  16. Correspondenceless 3D-2D registration based on expectation conditional maximization

    NASA Astrophysics Data System (ADS)

    Kang, X.; Taylor, R. H.; Armand, M.; Otake, Y.; Yau, W. P.; Cheung, P. Y. S.; Hu, Y.

    2011-03-01

    3D-2D registration is a fundamental task in image guided interventions. Due to the physics of X-ray imaging, however, traditional point based methods face new challenges: local point features are indistinguishable, making it difficult to establish correspondence between 2D image feature points and 3D model points. In this paper, we propose a novel method to accomplish 3D-2D registration without known correspondences. Given a set of unmatched 3D and 2D points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting the problem into the expectation conditional maximization framework, we can iteratively refine the registration parameters without establishing one-to-one point correspondences. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).

  17. Bandwidth utilization maximization of scientific RF communication systems

    SciTech Connect

    Rey, D.; Ryan, W.; Ross, M.

    1997-01-01

    A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques. This is done with little or no increase in the transmit power which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.

  18. Children's utilization of emotion expectancies in moral decision-making.

    PubMed

    Hertz, Steven G; Krettenauer, Tobias

    2014-09-01

    This study investigated the relevance of emotion expectancies for children's moral decision-making. The sample included 131 participants from three different grade levels (M = 8.39 years, SD = 2.45, range 4.58-12.42). Participants were presented a set of scenarios that described various emotional outcomes of (im)moral actions and asked to decide what they would do if they were in the protagonists' shoes. Overall, it was found that the anticipation of moral emotions predicted an increased likelihood of moral choices in antisocial and prosocial contexts. In younger children, anticipated moral emotions predicted moral choice for prosocial actions, but not for antisocial actions. Older children showed evidence for the utilization of anticipated emotions in both prosocial and antisocial behaviours. Moreover, for older children, the decision to act prosocially was less likely in the presence of non-moral emotions. Findings suggest that the impact of emotion expectancies on children's moral decision-making increases with age. Contrary to happy victimizer research, the study does not support the notion that young children use moral emotion expectancies for moral decision-making in the context of antisocial actions.

  19. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    NASA Astrophysics Data System (ADS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-05-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.

  20. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Srivastava, Ashok N.

    2009-01-01

    This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.

  1. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

    In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually, which leads to a loss of reproducibility. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.

  2. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm.

    PubMed

    Liu, Haiguang; Spence, John C H

    2014-11-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.

  3. Modelling Transcriptional Regulation with a Mixture of Factor Analyzers and Variational Bayesian Expectation Maximization

    PubMed Central

    2009-01-01

    Understanding the mechanisms of gene transcriptional regulation through analysis of high-throughput postgenomic data is one of the central problems of computational systems biology. Various approaches have been proposed, but most of them fail to address at least one of the following objectives: (1) allow for the fact that transcription factors are potentially subject to posttranscriptional regulation; (2) allow for the fact that transcription factors cooperate as a functional complex in regulating gene expression, and (3) provide a model and a learning algorithm with manageable computational complexity. The objective of the present study is to propose and test a method that addresses these three issues. The model we employ is a mixture of factor analyzers, in which the latent variables correspond to different transcription factors, grouped into complexes or modules. We pursue inference in a Bayesian framework, using the Variational Bayesian Expectation Maximization (VBEM) algorithm for approximate inference of the posterior distributions of the model parameters, and estimation of a lower bound on the marginal likelihood for model selection. We have evaluated the performance of the proposed method on three criteria: activity profile reconstruction, gene clustering, and network inference. PMID:19572011

  4. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm

    PubMed Central

    Liu, Haiguang; Spence, John C.H.

    2014-01-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated. PMID:25485120

  5. Numerical estimation of adsorption energy distributions from adsorption isotherm data with the expectation-maximization method

    SciTech Connect

    Stanley, B.J.; Guiochon, G.

    1993-08-01

    The expectation-maximization (EM) method of parameter estimation is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms. EM does not require prior knowledge of the distribution function or the isotherm, requires no smoothing of the isotherm data, and converges with high stability towards the maximum-likelihood estimate. The method is therefore robust and accurate at high iteration numbers. The EM algorithm is tested with simulated energy distributions corresponding to unimodal Gaussian, bimodal Gaussian, and Poisson distributions, and to the distributions resulting from Misra isotherms. Theoretical isotherms are generated from these distributions using the Langmuir model, and then chromatographic band profiles are computed using the ideal model of chromatography. Noise comparable to that observed experimentally is then introduced in the theoretical band profiles. The isotherm is then calculated using the elution-by-characteristic-points method. The energy distribution given by the EM method is compared to the original one. Results are contrasted to those obtained with the House and Jaycock algorithm HILDA, and shown to be superior in terms of robustness, accuracy, and information theory. The effect of undersampling the high-pressure/low-energy region of the adsorption isotherm is reported and discussed for the EM algorithm, as well as the effect of the signal-to-noise ratio on the degree of heterogeneity that may be estimated experimentally.

  6. Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework

    PubMed Central

    Jain, Saurabh; Ribbens, Annemie; Sima, Diana M.; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H.; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2016-01-01

    Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses output of step (1) and step (2) to provide the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long is evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was equal to 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. Also, MSmetrix-long compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of Lesion Segmentation Toolbox. PMID:28066162

  7. Nonlinear impairment compensation using expectation maximization for dispersion managed and unmanaged PDM 16-QAM transmission.

    PubMed

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo; Borkowski, Robert; Caballero, Antonio; Arlunno, Valeria; Schmidt, Mikkel N; Gonzales, Neil Guerrero; Mao, Bangning; Ye, Yabin; Larsen, Knud J; Monroy, Idelfonso Tafur

    2012-12-10

    In this paper, we show numerically and experimentally that the expectation maximization (EM) algorithm is a powerful tool for combating system impairments such as fibre nonlinearities, inphase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm that can be used to compensate for impairments which leave an imprint on the signal constellation, i.e. rotation and distortion of the constellation points. EM is especially effective for combating non-linear phase noise (NLPN), because NLPN severely distorts the signal constellation and this distortion can be tracked by the EM. The gain in nonlinear system tolerance for the system under consideration is shown to depend on the transmission scenario. We show experimentally that for a dispersion managed polarization multiplexed 16-QAM system at 14 Gbaud, a gain in nonlinear system tolerance of up to 3 dB can be obtained; for a dispersion unmanaged system, this gain reduces to 0.5 dB.

  8. Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework.

    PubMed

    Jain, Saurabh; Ribbens, Annemie; Sima, Diana M; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2016-01-01

    Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses output of step (1) and step (2) to provide the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long is evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was equal to 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. Also, MSmetrix-long compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of Lesion Segmentation Toolbox.

  9. A homomorphic filtering and expectation maximization approach for the point spread function estimation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Benameur, S.; Mignotte, M.; Lavoie, F.

    2012-03-01

    In modern ultrasound imaging systems, the spatial resolution is severely limited by the effects of both the finite aperture and overall bandwidth of ultrasound transducers and the non-negligible width of the transmitted ultrasound beams. This low spatial resolution remains the major limiting factor in the clinical usefulness of medical ultrasound images. In order to recover clinically important image details, which are often masked due to this resolution limitation, an image restoration procedure should be applied. To this end, an estimate of the Point Spread Function (PSF) of the ultrasound imaging system is required. This paper introduces a novel, reliable, and fast Maximum Likelihood (ML) approach for recovering the PSF of an ultrasound imaging system. The method assumes as a constraint that the PSF is of known parametric form; under this constraint, the parameter values of its associated Modulation Transfer Function (MTF) are efficiently estimated using a homomorphic filter, a denoising step, and an expectation-maximization (EM) based clustering algorithm. Given this PSF estimate, a deconvolution can then be used to improve the spatial resolution of an ultrasound image and to obtain an estimate (independent of the properties of the imaging system) of the true tissue reflectivity function. The experiments reported in this paper demonstrate the efficiency and potential of this new estimation and blind deconvolution approach.

  10. Statistical models of synaptic transmission evaluated using the expectation-maximization algorithm.

    PubMed Central

    Stricker, C; Redman, S

    1994-01-01

    Amplitude fluctuations of evoked synaptic responses can be used to extract information on the probabilities of release at the active sites, and on the amplitudes of the synaptic responses generated by transmission at each active site. The parameters that describe this process must be obtained from an incomplete data set represented by the probability density of the evoked synaptic response. In this paper, the equations required to calculate these parameters using the Expectation-Maximization algorithm and the maximum likelihood criterion have been derived for a variety of statistical models of synaptic transmission. These models are ones where the probabilities associated with the different discrete amplitudes in the evoked responses are a) unconstrained, b) binomial, and c) compound binomial. The discrete amplitudes may be separated by equal (quantal) or unequal amounts, with or without quantal variance. Alternative models have been considered where the variance associated with the discrete amplitudes is sufficiently large such that no quantal amplitudes can be detected. These models involve the sum of a normal distribution (to represent failures) and a unimodal distribution (to represent the evoked responses). The implementation of the algorithm is described in each case, and its accuracy and convergence have been demonstrated. PMID:7948679
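
    The simplest case in this family, discrete amplitudes at equal quantal spacing with a shared variance, admits closed-form EM updates. A minimal sketch (the `em_quantal` name, initializations, and unconstrained level probabilities are our illustrative choices, not the paper's binomial or compound-binomial variants):

```python
import numpy as np

def em_quantal(x, n_levels=3, n_iter=100):
    """EM for a quantal mixture: amplitudes at 0, q, 2q, ..., shared variance.

    Component means are constrained to integer multiples of one quantal
    size q; EM estimates q, the common variance, and the probabilities
    of each discrete amplitude (level 0 represents failures).
    """
    js = np.arange(n_levels + 1)          # levels 0..J (0 = failure)
    q = x.max() / n_levels                # rough initial quantal size
    var = x.var() / 16                    # rough initial variance
    p = np.full(js.size, 1.0 / js.size)   # level probabilities
    for _ in range(n_iter):
        # E-step: responsibilities of each quantal level for each response
        dens = np.exp(-0.5 * (x[:, None] - js * q) ** 2 / var)
        r = p * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates (q solves a weighted least squares)
        p = r.mean(axis=0)
        q = (r * js * x[:, None]).sum() / (r * js ** 2).sum()
        var = (r * (x[:, None] - js * q) ** 2).sum() / x.size
    return q, var, p
```

    The binomial and compound-binomial models in the paper replace the free probabilities p with constrained forms, which changes only the M-step for p.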

  11. Target localization and signature extraction in GPR data using expectation-maximization and principal component analysis

    NASA Astrophysics Data System (ADS)

    Reichman, Daniel; Morton, Kenneth D.; Collins, Leslie M.; Torrione, Peter A.

    2014-05-01

    Ground Penetrating Radar (GPR) is a very promising technology for subsurface threat detection. A successful algorithm employing GPR should achieve high detection rates at a low false-alarm rate and do so at operationally relevant speeds. GPRs measure reflections at dielectric boundaries that occur at the interfaces between different materials. These boundaries may occur at any depth, within the sensor's range, and furthermore, the dielectric changes could be such that they induce a 180 degree phase shift in the received signal relative to the emitted GPR pulse. As a result of these time-of-arrival and phase variations, extracting robust features from target responses in GPR is not straightforward. In this work, a method to mitigate polarity and alignment variations based on an expectation-maximization (EM) principal-component analysis (PCA) approach is proposed. This work demonstrates how model-based target alignment can significantly improve detection performance. Performance is measured according to the improvement in the receiver operating characteristic (ROC) curve for classification before and after the data is properly aligned and phase-corrected.

  12. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture.

    PubMed

    Kreitler, Jason; Stoms, David M; Davis, Frank W

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
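    The gap the study quantifies between a greedy heuristic and exact optimization can be reproduced on a toy budget-constrained selection problem. A minimal sketch (the function names and data are illustrative, not the authors' model; enumeration stands in for the integer programming solver):

    ```python
    from itertools import combinations

    def greedy_select(utility, cost, budget):
        """Greedy heuristic: repeatedly take the best utility-per-cost item that fits."""
        order = sorted(range(len(utility)), key=lambda i: utility[i] / cost[i], reverse=True)
        total, spent = 0.0, 0.0
        for i in order:
            if spent + cost[i] <= budget:
                total += utility[i]
                spent += cost[i]
        return total

    def optimal_select(utility, cost, budget):
        """Exact optimum by enumeration (an ILP solver would scale this up)."""
        best = 0.0
        for r in range(len(utility) + 1):
            for subset in combinations(range(len(utility)), r):
                if sum(cost[i] for i in subset) <= budget:
                    best = max(best, sum(utility[i] for i in subset))
        return best
    ```

    For example, with utilities [10, 6, 6], costs [5, 4, 4], and a budget of 8, the greedy pick attains utility 10 while the optimum is 12, a 20% gain from optimization, of the same character as the up-to-12% gains reported above.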

  13. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  14. Computational rationality: linking mechanism and behavior through bounded utility maximization.

    PubMed

    Lewis, Richard L; Howes, Andrew; Singh, Satinder

    2014-04-01

    We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing "levels" approaches to explanation, and to other optimality-based modeling approaches.

  15. Expecting the unexpected: applying the Develop-Distort Dilemma to maximize positive market impacts in health.

    PubMed

    Peters, David H; Paina, Ligia; Bennett, Sara

    2012-10-01

    Although health interventions start with good intentions to develop services for disadvantaged populations, they often distort the health market, making the delivery or financing of services difficult once the intervention is over: a condition called the 'Develop-Distort Dilemma' (DDD). In this paper, we describe how to examine whether a proposed intervention may develop or distort the health market. Our goal is to produce a tool that facilitates meaningful and systematic dialogue for practitioners and researchers to ensure that well-intentioned health interventions lead to productive health systems while reducing the undesirable distortions of such efforts. We apply the DDD tool to plan for development rather than distortions in health markets, using intervention research being conducted under the Future Health Systems consortium in Bangladesh, China and Uganda. Through a review of research proposals and interviews with principal investigators, we use the DDD tool to systematically understand how a project fits within the broader health market system, and to identify gaps in planning for sustainability. We found that while current stakeholders and funding sources for activities were easily identified, future ones were not. The implication is that the projects could raise community expectations that future services will be available and paid for, despite this actually being uncertain. Each project addressed the 'rules' of the health market system differently. The China research assesses changes in the formal financing rules, whereas Bangladesh and Uganda's projects involve influencing community level providers, where informal rules are more important. In each case, we recognize the importance of building trust between providers, communities and government officials. Each project could both develop and distort local health markets. 
Anyone intervening in the health market must recognize the main market perturbations, whether positive or negative, and manage them so

  16. Quantitative PET image reconstruction employing nested expectation-maximization deconvolution for motion compensation.

    PubMed

    Karakatsanis, Nicolas A; Tsoumpas, Charalampos; Zaidi, Habib

    2016-11-16

    Bulk body motion may randomly occur during PET acquisitions, introducing blurring, attenuation-emission mismatches and, in dynamic PET, discontinuities in the measured time activity curves between consecutive frames. Meanwhile, dynamic PET scans are longer, thus increasing the probability of bulk motion. In this study, we propose a streamlined 3D PET motion-compensated image reconstruction (3D-MCIR) framework, capable of robustly deconvolving intra-frame motion from a static or dynamic 3D sinogram. The presented 3D-MCIR methods need not partition the data into multiple gates, as 4D MCIR algorithms do, or access list-mode (LM) data, as LM MCIR methods do, both associated with increased computation or memory resources. The proposed algorithms can support compensation for any periodic and non-periodic motion, such as cardio-respiratory or bulk motion, the latter including rolling, twisting or drifting. Inspired by the widely adopted point-spread function (PSF) deconvolution 3D PET reconstruction techniques, here we introduce an image-based 3D generalized motion deconvolution method within the standard 3D maximum-likelihood expectation-maximization (ML-EM) reconstruction framework. In particular, we initially integrate a motion blurring kernel, accounting for every tracked motion within a frame, as an additional MLEM modeling component in the image space (integrated 3D-MCIR). Subsequently, we replace the integrated model component with a nested iterative Richardson-Lucy (RL) image-based deconvolution method to accelerate the MLEM algorithm convergence rate (RL-3D-MCIR). The final method was evaluated with realistic simulations of whole-body dynamic PET data employing the XCAT phantom and real human bulk motion profiles, the latter estimated from volunteer dynamic MRI scans. In addition, metabolic uptake rate Ki parametric images were generated with the standard Patlak method. Our results demonstrate significant improvement in contrast-to-noise ratio (CNR) and
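    The nested Richardson-Lucy step described above is, at its core, a multiplicative, EM-derived deconvolution. A minimal 1-D sketch of RL deconvolution (illustrative only, not the published PET implementation, which works on 3D images with motion kernels):

    ```python
    import numpy as np

    def richardson_lucy(blurred, kernel, n_iter=50):
        """1-D Richardson-Lucy deconvolution: a multiplicative, EM-derived scheme
        that restores a nonnegative signal observed through a known blur kernel."""
        blurred = np.asarray(blurred, float)
        kernel = np.asarray(kernel, float)
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            predicted = np.convolve(estimate, kernel, mode="same")
            ratio = blurred / np.maximum(predicted, 1e-12)
            # Correlate the ratio with the kernel (convolve with its mirror image).
            estimate = estimate * np.convolve(ratio, kernel[::-1], mode="same")
        return estimate
    ```

    In the RL-3D-MCIR scheme, iterations of this form are nested inside each ML-EM cycle, with the motion blurring kernel playing the role of `kernel`.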

  17. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  18. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    NASA Astrophysics Data System (ADS)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. 
Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  19. Recursive expectation-maximization clustering: a method for identifying buffering mechanisms composed of phenomic modules.

    PubMed

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A; Hartman, John L

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. 
Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  20. 76 FR 49473 - Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ... AGENCY Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine... decisions on data received in response to the test orders issued under the Endocrine Disruptor Screening...'' system, which means EPA will not know your identity or contact information unless you provide it in...

  1. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
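    Standard Patlak analysis itself reduces to a linear fit: after an equilibration time t*, the ratio tissue(t)/plasma(t) is linear in the normalized integral of the plasma curve, with slope Ki and intercept equal to the distribution volume. A minimal post-reconstruction sketch (illustrative only, not the nested 4D implementation described above):

    ```python
    import numpy as np

    def patlak_ki(t, plasma, tissue, t_star=10.0):
        """Standard Patlak graphical analysis: for t >= t_star, tissue/plasma is
        linear in (cumulative integral of plasma)/plasma, with slope Ki."""
        # Trapezoidal cumulative integral of the plasma input function.
        cum = np.concatenate([[0.0], np.cumsum(0.5 * (plasma[1:] + plasma[:-1]) * np.diff(t))])
        x = cum / plasma
        y = tissue / plasma
        mask = t >= t_star
        ki, intercept = np.polyfit(x[mask], y[mask], 1)
        return ki, intercept
    ```

    The 4D method above performs this estimation inside the reconstruction loop rather than on already-reconstructed frames, which is what yields the reported noise reduction.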

  2. Crustacean hemolymph microbiota: Endemic, tightly controlled, and utilization expectable.

    PubMed

    Wang, Xian-Wei; Wang, Jin-Xing

    2015-12-01

    A growing body of evidence suggests that the hemolymph of numerous apparently healthy invertebrates is unsterile. Investigating the properties of the hemolymph microbiota and the homeostasis between host and bacteria helps to reveal bacterial pathogenesis and host immunity, and suggests possible applications in disease control. Crustaceans represent a large group of aquatic animals, and crustacean fisheries are therefore of considerable economic value worldwide. Research on the crustacean hemolymph microbiota has accumulated over the years. In the present study, we summarize the currently available information and present a comprehensive analysis of the homeostasis between host and bacteria. In general, the presence of a microbiota in crustacean hemolymph is an endemic condition that can be influenced by internal and external factors. Opportunistic bacteria may have undergone changes or mutations under hemolymph stress. Meanwhile, hosts suppress proliferation of the hemolymph microbiota with the help of critical antimicrobial peptides and lectins. The hemolymph microbiota may benefit hosts by conferring resistance against external insults, and may also be exploited in aquaculture.

  3. OPTUM : Optimum Portfolio Tool for Utility Maximization documentation and user's guide.

    SciTech Connect

    VanKuiken, J. C.; Jusko, M. J.; Samsa, M. E.; Decision and Information Sciences

    2008-09-30

    The Optimum Portfolio Tool for Utility Maximization (OPTUM) is a versatile and powerful tool for selecting, optimizing, and analyzing portfolios. The software introduces a compact interface that facilitates problem definition, complex constraint specification, and portfolio analysis. The tool allows simple comparisons between user-preferred choices and optimized selections. OPTUM uses a portable, efficient, mixed-integer optimization engine (lp-solve) to derive the optimal mix of projects that satisfies the constraints and maximizes the total portfolio utility. OPTUM provides advanced features, such as convenient menus for specifying conditional constraints and specialized graphical displays of the optimal frontier and alternative solutions to assist in sensitivity visualization. OPTUM can be readily applied to other nonportfolio, resource-constrained optimization problems.

  4. An Expectation-Maximization Method for Spatio-Temporal Blind Source Separation Using an AR-MOG Source Model

    PubMed Central

    Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.

    2009-01-01

    In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368

  5. Maximum Likelihood Expectation-Maximization Algorithms Applied to Localization and Identification of Radioactive Sources with Recent Coded Mask Gamma Cameras

    SciTech Connect

    Lemaire, H.; Barat, E.; Carrel, F.; Dautremer, T.; Dubos, S.; Limousin, O.; Montagu, T.; Normand, S.; Schoepff, V.; Amgarou, K.; Menaa, N.; Angelique, J.-C.; Patoz, A.

    2015-07-01

    In this work, we tested maximum-likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is the test of MAPEM algorithms that integrate specific prior values, adapted to the gamma imaging context, into the reconstruction. (authors)

  6. Maternal Immunization Earlier in Pregnancy Maximizes Antibody Transfer and Expected Infant Seropositivity Against Pertussis

    PubMed Central

    Eberhardt, Christiane S.; Blanchard-Rohner, Geraldine; Lemaître, Barbara; Boukrid, Meriem; Combescure, Christophe; Othenin-Girard, Véronique; Chilin, Antonina; Petre, Jean; de Tejada, Begoña Martinez; Siegrist, Claire-Anne

    2016-01-01

    Background. Maternal immunization against pertussis is currently recommended after the 26th gestational week (GW). Data on the optimal timing of maternal immunization are inconsistent. Methods. We conducted a prospective observational noninferiority study comparing the influence of second-trimester (GW 13–25) vs third-trimester (≥GW 26) tetanus-diphtheria-acellular pertussis (Tdap) immunization in pregnant women who delivered at term. Geometric mean concentrations (GMCs) of cord blood antibodies to recombinant pertussis toxin (PT) and filamentous hemagglutinin (FHA) were assessed by enzyme-linked immunosorbent assay. The primary endpoints were GMCs and expected infant seropositivity rates, defined by birth anti-PT >30 enzyme-linked immunosorbent assay units (EU)/mL, taken to confer seropositivity until 3 months of age. Results. We included 335 women (mean age, 31.0 ± 5.1 years; mean gestational age, 39.3 ± 1.3 GW) previously immunized with Tdap in the second (n = 122) or third (n = 213) trimester. Anti-PT and anti-FHA GMCs were higher following second- vs third-trimester immunization (PT: 57.1 EU/mL [95% confidence interval {CI}, 47.8–68.2] vs 31.1 EU/mL [95% CI, 25.7–37.7], P < .001; FHA: 284.4 EU/mL [95% CI, 241.3–335.2] vs 140.2 EU/mL [95% CI, 115.3–170.3], P < .001). The adjusted GMC ratios after second- vs third-trimester immunization differed significantly (PT: 1.9 [95% CI, 1.4–2.5]; FHA: 2.2 [95% CI, 1.7–3.0], P < .001). Expected infant seropositivity rates reached 80% vs 55% following second- vs third-trimester immunization (adjusted odds ratio, 3.7 [95% CI, 2.1–6.5], P < .001). Conclusions. Early second-trimester maternal Tdap immunization significantly increased neonatal antibodies. Recommending immunization from the second trimester onward would widen the immunization opportunity window and could improve seroprotection. PMID:26797213

  7. Expectation maximization classification and Laplacian based thickness measurement for cerebral cortex thickness estimation

    NASA Astrophysics Data System (ADS)

    Holden, Mark; Moreno-Vallecillo, Rafael; Harris, Anthony; Gomes, Lavier J.; Diep, Than-Mei; Bourgeat, Pierrick T.; Ourselin, Sébastien

    2007-03-01

    We describe a new framework for measuring cortical thickness from MR human brain images. This involves the integration of a method of tissue classification with one to estimate thickness in 3D. We have added a boundary detection step to facilitate this. The classification stage utilizes the Expectation Maximisation (EM) algorithm to classify voxels associated with the tissue types that interface with cortical grey matter (GM, WM and CSF). This uses a Gaussian mixture and the EM algorithm to estimate the position and width of the Gaussians that model the intensity distributions of the GM, WM and CSF tissue classes. The boundary detection stage uses the GM, WM and CSF classifications and finds connected components, fills holes and then applies a geodesic distance transform to determine the GM/WM interface. Finally, the thickness of the cortical grey matter is estimated by solving Laplace's equation and determining the streamlines that connect the inner and outer boundaries. The contribution of this work is the adaptation of the classification and thickness measurement steps, neither requiring manual initialisation, and also the validation strategy. The resultant algorithm is fully automatic and avoids the computational expense associated with preserving the cortical surface topology. We have devised a validation strategy indicating that the cortical segmentation of a gold standard brain atlas has a similarity index of 0.91, that thickness estimation has subvoxel accuracy when evaluated using a synthetic image, and that the precision of the combined segmentation and thickness measurement is 1.54 mm across three clinical images.

  8. Maximizing acute fat utilization: effects of exercise, food, and individual characteristics.

    PubMed

    Bennard, Patrick; Imbeault, Pascal; Doucet, Eric

    2005-08-01

    In discussion of the physiological mechanisms that regulate fat metabolism, and with consideration of the metabolic stimuli that modulate substrate metabolism, the issue of how an acute state of negative lipid balance can be maximized is addressed. The regulation of lipolysis by catecholamines and insulin is reviewed, and the mechanisms of fatty acid mobilization and uptake by muscle are also briefly discussed. The implications of substrate availability and the hormonal response during physiological states such as fasting, exercise, and after food intake are also addressed, with particular regard to the influences on fatty acid mobilization and/or oxidation from eliciting these stimuli conjointly. Finally, a brief discussion is given of both the nature of exercise and the exercising individual, and how these factors influence fat metabolism during exercise. It is also a primary thrust of this paper to underline gaps in the existing literature with regard to exercise timing concerning food ingestion for maximizing acute lipid utilization.

  9. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  10. An innovative multimodal/multispectral image registration method for medical images based on the Expectation-Maximization algorithm.

    PubMed

    Arce-Santana, Edgar; Campos-Delgado, Daniel U; Mejia-Rodriguez, Aldo; Reducindo, Isnardo

    2015-01-01

    In this paper, we present a methodology for multimodal/multispectral registration of medical images. The approach is formulated using the Expectation-Maximization (EM) methodology, such that we estimate the parameters of a geometric transformation that aligns multimodal/multispectral images. In this framework, the hidden random variables are associated with the intensity relations between the studied images, which makes it possible to compare multispectral intensity values between images of different modalities. The methodology is essentially an iterative two-step procedure in which, at each step, a new estimate of the joint conditional multispectral intensity distribution and of the geometric transformation is computed. The proposed algorithm was tested with different kinds of medical images, and the results show that it can be used to efficiently align multimodal/multispectral medical images.

  11. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    NASA Astrophysics Data System (ADS)

    Endrizzi, M.; Delogu, P.; Oliva, P.

    2014-12-01

    An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7-40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself, and the possibility of adding this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated.
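    The combination of multiplicative EM updates with a discrepancy-principle stopping rule can be sketched generically for any nonnegative linear model b ≈ Ax. This is an illustration only, not the authors' spectral reconstruction code; the matrix and stopping tolerance are made up for the example:

    ```python
    import numpy as np

    def mlem_discrepancy(A, b, noise_level, max_iter=5000):
        """MLEM multiplicative updates for b ~ A @ x (all entries nonnegative),
        stopped by the discrepancy principle: iterate only until the residual
        falls to the estimated noise level, to avoid amplifying noise."""
        x = np.full(A.shape[1], b.sum() / A.sum())
        sensitivity = A.sum(axis=0)
        for it in range(1, max_iter + 1):
            forward = A @ x
            x = x * (A.T @ (b / np.maximum(forward, 1e-12))) / sensitivity
            if np.linalg.norm(A @ x - b) <= noise_level:
                return x, it
        return x, max_iter
    ```

    In the transmission-spectrum setting, the columns of A would hold the modeled attenuation response of each energy bin, b the measured transmissions, and `noise_level` the estimated measurement error that drives the stopping rule.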

  12. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    PubMed

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

    The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability during the E-step of the EM algorithm with a mean filter. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from a standard database. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
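The core idea of mean-filtering the E-step posterior can be illustrated on a 1-D toy signal. This is a hedged sketch under assumed names: class variance is fixed and known, priors are equal, and the paper's bias-field correction and pixel re-labeling steps are omitted.

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def spatial_em(pixels, mus, var=1.0, n_iter=10, w=1):
    """EM segmentation of a 1-D signal into len(mus) classes; the E-step
    posterior is mean-filtered over a (2w+1) window before the M-step."""
    k, n = len(mus), len(pixels)
    post = []
    for _ in range(n_iter):
        # E-step: class posteriors at each pixel (equal priors assumed)
        raw = []
        for x in pixels:
            p = [gauss(x, m, var) for m in mus]
            z = sum(p) or 1e-300
            raw.append([pi / z for pi in p])
        # spatial step: mean-filter each class posterior, then renormalize
        post = []
        for i in range(n):
            lo, hi = max(0, i - w), min(n, i + w + 1)
            avg = [sum(raw[j][c] for j in range(lo, hi)) / (hi - lo)
                   for c in range(k)]
            z = sum(avg) or 1e-300
            post.append([a / z for a in avg])
        # M-step: update class means from the smoothed posteriors
        mus = [sum(post[i][c] * pixels[i] for i in range(n)) /
               (sum(post[i][c] for i in range(n)) or 1e-300) for c in range(k)]
    return mus, post
```

The smoothing is what suppresses isolated noisy pixels: a lone outlier surrounded by pixels of the other class inherits its neighbors' majority label.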

  13. Patch-based Augmentation of Expectation-Maximization for Brain MRI Tissue Segmentation at Arbitrary Age after Premature Birth

    PubMed Central

    Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin

    2015-01-01

    Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay and therefore occur at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age specific atlas based methods and evaluate this using a unique manually traced database of high resolution images spanning 20 gestational weeks of development. We examine the complementary strengths of age specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques, patch-based augmentation of Expectation-Maximization with weighted fusion and a spatial variability constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with the conventional age specific atlas-based segmentation and direct patch based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles and sulcal cerebrospinal fluid regions, while maintaining comparable results in deep gray matter. PMID:26702777

  14. Patch-based augmentation of Expectation-Maximization for brain MRI tissue segmentation at arbitrary age after premature birth.

    PubMed

    Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin

    2016-02-15

    Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay and therefore occur at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age specific atlas based methods and evaluate this using a unique manually traced database of high resolution images spanning 20 gestational weeks of development. We examine the complementary strengths of age specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques, patch-based augmentation of Expectation-Maximization with weighted fusion and a spatial variability constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with the conventional age specific atlas-based segmentation and direct patch based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles and sulcal cerebrospinal fluid regions, while maintaining comparable results in deep gray matter.

  15. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory

    PubMed Central

    Pastor-Bernier, Alexandre; Plott, Charles R.; Schultz, Wolfram

    2017-01-01

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices “as if” they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals’ choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals’ preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved “as if” they had well-structured preferences and maximized utility. PMID:28202727

  16. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory.

    PubMed

    Pastor-Bernier, Alexandre; Plott, Charles R; Schultz, Wolfram

    2017-03-07

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices "as if" they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals' choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals' preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved "as if" they had well-structured preferences and maximized utility.
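The Weak Axiom of Revealed Preference (WARP) mentioned above has a simple operational form: if option x is ever chosen while y is available, then y must never be chosen from a set that also offers x. A minimal deterministic check over observed choices (hypothetical data format; the monkeys' stochastic choices would need a probabilistic version):

```python
def satisfies_warp(observations):
    """observations: iterable of (menu, choice) pairs, where menu is a set of
    options and choice is the option picked from that menu."""
    revealed = set()  # (x, y): x was chosen while y was available
    for menu, x in observations:
        for y in menu:
            if y != x:
                revealed.add((x, y))
    # WARP fails if two options are each directly revealed preferred to the other
    return all((y, x) not in revealed for (x, y) in revealed)
```

For example, choosing 'a' from {a, b} and later 'b' from {a, b, c} violates WARP, since each option is then directly revealed preferred to the other.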

  17. MUSIC-Expected maximization gaussian mixture methodology for clustering and detection of task-related neuronal firing rates.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2017-01-15

    Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates.

  18. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphical-processing-unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed in MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling designs were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing parameter estimates and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and offered more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.

  19. Development of a fully 3D system model in iterative expectation-maximization reconstruction for cone-beam SPECT

    NASA Astrophysics Data System (ADS)

    Ye, Hongwei; Vogelsang, Levon; Feiglin, David H.; Lipson, Edward D.; Krol, Andrzej

    2008-03-01

    In order to improve reconstructed image quality for cone-beam collimator (CBC) SPECT, we have developed and implemented a fully 3D reconstruction using an ordered subsets expectation maximization (OSEM) algorithm, along with a volumetric cone-volume system model (CVSM), a modified attenuation compensation, and a 3D depth- and angle-dependent resolution and sensitivity correction. SPECT data were acquired in a 128×128 matrix, in 120 views with a single circular orbit. Two sets of numerical Defrise phantoms were used to simulate CBC SPECT scans, and low-noise, scatter-free projection datasets were obtained using the SimSET Monte Carlo package. The reconstructed images, obtained using OSEM with a line-length system model (LLSM) and a 3D Gaussian post-filter, and OSEM with CVSM and a 3D Gaussian post-filter, were quantitatively studied. Overall improvement in image quality was observed for OSEM-CVSM compared with OSEM-LLSM, including better transaxial resolution, higher contrast-to-noise ratio between hot and cold disks, and better accuracy and lower bias.
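The ordered-subsets EM iteration described here, independent of the particular system model, can be sketched as follows. This is a toy sketch: the system matrix is a generic dense matrix rather than the paper's cone-volume model, and attenuation and resolution corrections are omitted.

```python
def osem(A, proj, n_subsets, n_iter):
    """Ordered-subsets EM. A[j][i] is the probability that activity in voxel i
    is detected in projection bin j; proj[j] are the measured counts."""
    n_vox = len(A[0])
    n_bins = len(proj)
    x = [1.0] * n_vox  # uniform initial activity estimate
    # interleaved subsets of projection bins
    subsets = [list(range(s, n_bins, n_subsets)) for s in range(n_subsets)]
    for _ in range(n_iter):
        for sub in subsets:  # one multiplicative update per subset
            fwd = {j: sum(A[j][i] * x[i] for i in range(n_vox)) or 1e-12
                   for j in sub}
            for i in range(n_vox):
                norm = sum(A[j][i] for j in sub) or 1e-12
                x[i] *= sum(A[j][i] * proj[j] / fwd[j] for j in sub) / norm
    return x
```

Because each update uses only a subset of the projections, one pass over all subsets gives roughly the acceleration of `n_subsets` full MLEM iterations.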

  20. Expectation-maximization algorithm for determining natural selection of Y-linked genes through two-sex branching processes.

    PubMed

    González, M; Gutiérrez, C; Martínez, R

    2012-09-01

    A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.

  1. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent the survival time distribution of a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML over not only the other proposed EM algorithms but also conventional supervised, unsupervised and semi-supervised learning algorithms.
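The soft-decomposition idea, with partial labels clamping the E-step, can be sketched for a two-class exponential mixture of survival times. This is a hedged toy: the function name, the exponential family choice, and the label encoding are assumptions, and none of the four proposed variants' missingness mechanisms are modeled.

```python
import math

def em_exp_mixture(times, labels, n_iter=300):
    """times: survival times; labels[i] in {0, 1, None} (None = unlabeled).
    EM for a two-class exponential mixture; a known label fixes that sample's
    responsibility instead of estimating it in the E-step."""
    mean = sum(times) / len(times)
    lam = [2.0 / mean, 0.5 / mean]  # initial rates: one fast, one slow class
    pi = 0.5                        # mixing proportion of class 0
    for _ in range(n_iter):
        # E-step: responsibility of class 0 for each sample
        r = []
        for t, lab in zip(times, labels):
            if lab is not None:
                r.append(1.0 if lab == 0 else 0.0)  # partial label clamps r
            else:
                p0 = pi * lam[0] * math.exp(-lam[0] * t)
                p1 = (1.0 - pi) * lam[1] * math.exp(-lam[1] * t)
                r.append(p0 / ((p0 + p1) or 1e-300))
        # M-step: closed-form MLE for exponential rates and the mixing weight
        n0 = sum(r)
        lam[0] = n0 / sum(ri * t for ri, t in zip(r, times))
        lam[1] = (len(times) - n0) / sum((1.0 - ri) * t for ri, t in zip(r, times))
        pi = n0 / len(times)
    return pi, lam
```

Clamping the responsibilities of the labeled fraction also pins down class identity, which avoids the label-switching ambiguity of fully unsupervised mixtures.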

  2. Accuracy of Haplotype Frequency Estimation for Biallelic Loci, via the Expectation-Maximization Algorithm for Unphased Diploid Genotype Data

    PubMed Central

    Fallin, Daniele; Schork, Nicholas J.

    2000-01-01

    Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set—a fact that can be used to create “diagnostics” that a user can turn to for assessing potential
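The EM strategy for recovering haplotype frequencies from unphased genotypes can be sketched for the simplest two-locus biallelic case, where only the double heterozygote is phase-ambiguous. This is an illustrative sketch, not the estimators studied in the paper; names and the 0/1/2 genotype encoding are assumptions.

```python
def em_haplotype(genotypes, n_iter=100):
    """genotypes: list of (g1, g2), each g the count (0/1/2) of the major
    allele at that locus. Returns estimated frequencies of the four
    haplotypes [AB, Ab, aB, ab]."""
    idx = {('A', 'B'): 0, ('A', 'b'): 1, ('a', 'B'): 2, ('a', 'b'): 3}
    f = [0.25] * 4  # uniform starting frequencies
    n = 2 * len(genotypes)
    for _ in range(n_iter):
        counts = [0.0] * 4
        for g1, g2 in genotypes:
            if g1 == 1 and g2 == 1:
                # double heterozygote: E-step splits it between AB/ab and Ab/aB
                cis, trans = f[0] * f[3], f[1] * f[2]
                p = cis / ((cis + trans) or 1e-12)
                counts[0] += p; counts[3] += p
                counts[1] += 1 - p; counts[2] += 1 - p
            else:
                # phase is determined, e.g. (2, 1) -> one AB and one Ab
                a1 = ['A'] * g1 + ['a'] * (2 - g1)
                a2 = ['B'] * g2 + ['b'] * (2 - g2)
                counts[idx[(a1[0], a2[0])]] += 1
                counts[idx[(a1[1], a2[1])]] += 1
        f = [c / n for c in counts]  # M-step: renormalize expected counts
    return f
```

In a sample with perfect linkage disequilibrium (only AB and ab haplotypes), the E-step rapidly drives the phase probability of double heterozygotes toward the cis configuration.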

  3. The role of data assimilation in maximizing the utility of geospace observations (Invited)

    NASA Astrophysics Data System (ADS)

    Matsuo, T.

    2013-12-01

    Data assimilation can facilitate maximizing the utility of existing geospace observations by offering an ultimate marriage of inductive (data-driven) and deductive (first-principles based) approaches to addressing critical questions in space weather. Assimilative approaches that incorporate dynamical models are, in particular, capable of making a diverse set of observations consistent with physical processes included in a first-principles model, and allowing unobserved physical states to be inferred from observations. These points will be demonstrated in the context of the application of an ensemble Kalman filter (EnKF) to a thermosphere and ionosphere general circulation model. An important attribute of this approach is that the feedback between plasma and neutral variables is self-consistently treated both in the forecast model as well as in the assimilation scheme. This takes advantage of the intimate coupling between the thermosphere and ionosphere described in general circulation models to enable the inference of unobserved thermospheric states from the relatively plentiful observations of the ionosphere. Given the ever-growing infrastructure for the global navigation satellite system, this is indeed a promising prospect for geospace data assimilation. In principle, similar approaches can be applied to any geospace observing systems to extract more geophysical information from a given set of observations than would otherwise be possible.
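The key mechanism described, using ensemble covariances to update unobserved state components from observed ones, can be sketched with a stochastic (perturbed-observation) EnKF analysis step. This is a generic sketch, not tied to any thermosphere-ionosphere model; names, shapes, and the scalar observation are assumptions.

```python
import random

def enkf_update(ensemble, y, h_idx, r):
    """EnKF analysis step for one scalar observation.
    ensemble: list of state vectors; y: observed value; h_idx: index of the
    observed state component; r: observation-error variance."""
    n = len(ensemble)
    d = len(ensemble[0])
    mean = [sum(m[k] for m in ensemble) / n for k in range(d)]
    # sample covariance of each state component with the observed component
    var_h = sum((m[h_idx] - mean[h_idx]) ** 2 for m in ensemble) / (n - 1)
    cov = [sum((m[k] - mean[k]) * (m[h_idx] - mean[h_idx]) for m in ensemble)
           / (n - 1) for k in range(d)]
    gain = [c / (var_h + r) for c in cov]  # Kalman gain, one entry per component
    analysis = []
    for m in ensemble:
        y_pert = y + random.gauss(0.0, r ** 0.5)  # perturbed observation
        innov = y_pert - m[h_idx]
        analysis.append([m[k] + gain[k] * innov for k in range(d)])
    return analysis
```

Because the gain for every component is proportional to its ensemble covariance with the observed quantity, a strongly correlated but unobserved component (e.g. a neutral variable coupled to observed plasma density) is corrected along with the observed one.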

  4. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV

  5. Maximizing coupling-efficiency of high-power diode lasers utilizing hybrid assembly technology

    NASA Astrophysics Data System (ADS)

    Zontar, D.; Dogan, M.; Fulghum, S.; Müller, T.; Haag, S.; Brecher, C.

    2015-03-01

    In this paper, we present hybrid assembly technology to maximize coupling efficiency for spatially combined laser systems. High quality components, such as center-turned focusing units, as well as suitable assembly strategies are necessary to obtain the highest possible output ratios. Alignment strategies are challenging tasks due to their complexity and sensitivity. Especially in low-volume production, fully automated systems are economically at a disadvantage, as operator experience is often expensive. However, the reproducibility and quality of automatically assembled systems can be superior. Therefore automated and manual assembly techniques are combined to obtain high coupling efficiency while preserving maximum flexibility. The paper describes the necessary equipment and software to enable hybrid assembly processes. Micromanipulator technology with high step-resolution and six degrees of freedom provides a large number of possible evaluation points. Automated algorithms are necessary to speed up data gathering and alignment to efficiently utilize the available granularity for manual assembly processes. Furthermore, an engineering environment is presented to enable rapid prototyping of automation tasks with simultaneous data evaluation. Integration with simulation environments, e.g. Zemax, allows the verification of assembly strategies in advance. Data-driven decision making ensures constant high quality, documents the assembly process and is a basis for further improvement. The hybrid assembly technology has been applied in several applications with efficiencies above 80% and is discussed in this paper. High coupling efficiency has been achieved with minimized assembly as a result of semi-automated alignment. This paper focuses on hybrid automation for optimizing and attaching turning mirrors and collimation lenses.

  6. Putting Teens at the Center: Maximizing Public Utility of Urban Space through Youth Involvement in Planning and Employment.

    ERIC Educational Resources Information Center

    Lawson, Laura; McNally, Marcia

    1995-01-01

    Including teens' needs in the planning and maintenance of urban space suggests new methods of layering utility and maximizing benefit to teens and community. Discusses the Berkeley Youth Alternatives (BYA) Youth Employment Landscape Program and BYA Community Garden Patch. Program descriptions and evaluation provide future direction. (LZ)

  7. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    PubMed

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  8. Social and Professional Participation of Individuals Who Are Deaf: Utilizing the Psychosocial Potential Maximization Framework

    ERIC Educational Resources Information Center

    Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise

    2012-01-01

    This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…

  9. Limit order placement as an utility maximization problem and the origin of power law distribution of limit order prices

    NASA Astrophysics Data System (ADS)

    Lillo, F.

    2007-02-01

    I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight into the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.

  10. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    SciTech Connect

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev; Tang, Zhijun; Smith, Kandler; Shi, Ying

    2016-08-01

    This paper presents a control strategy to maximize the fuel economy of a parallel hybrid electric vehicle over a target battery life. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on battery life, which leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM) and (2) Predictive Power Management (PPM). PBM is the higher-level control with a slow update rate, e.g. once per month, responsible for generating optimal set points for PPM; the set points considered in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower-level control with a high update rate, e.g. once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium-duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that HEV fuel economy is maximized while meeting a specified target battery life. In contrast, the optimal unconstrained control strategy achieves marginally higher fuel economy but fails to meet the target battery life.
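The dynamic-programming step used by PBM can be illustrated on a toy power-split problem with a discretized SOC grid. All names, the integer units, and the linear fuel model are assumptions; the real PBM optimizes set points over the battery's lifetime, not a short drive cycle.

```python
def dp_min_fuel(demand, soc0, soc_max, batt_max, fuel_per_kw):
    """Backward DP over a discretized SOC grid: at each step choose integer
    battery output b (charging if b < 0, limited to +/- batt_max) so that
    the engine supplies demand[t] - b; minimize total engine fuel."""
    INF = float('inf')
    cost = [0.0] * (soc_max + 1)  # terminal cost-to-go: any final SOC allowed
    for d in reversed(demand):
        new = [INF] * (soc_max + 1)
        for s in range(soc_max + 1):
            for b in range(-batt_max, min(batt_max, d) + 1):
                s2 = s - b  # battery discharge lowers SOC
                if 0 <= s2 <= soc_max:
                    new[s] = min(new[s], (d - b) * fuel_per_kw + cost[s2])
        cost = new
    return cost[soc0]
```

Adding a terminal constraint on final SOC, or an aging-based penalty on battery throughput, would reproduce the paper's trade-off between consuming the battery and consuming fuel within the same recursion.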

  11. Optimal utilization of modern reproductive technologies to maximize the gross margin of milk production.

    PubMed

    Heikkilä, A-M; Peippo, J

    2012-06-01

    In this study, a linear programming model was developed to maximize the gross margin of milk production by determining the optimal use of different reproductive technologies in a dairy herd. The model has the potential to vary the use of conventional artificial insemination, insemination with X-sorted sperm, and the use of unselected or sex-selected embryo recovery and transfer. Data from Finnish dairy herd recording systems were used to parameterize the model. This paper presents the results of 6 scenarios for a herd size of 60 dairy cows. In the basic scenario, the optimum economic combination for Finnish conditions was to inseminate 10 heifers and 22 cows with unsorted semen, 8 heifers with X-sorted sperm, and to use 20 cows as embryo donors which was the upper constraint for this technique. The embryo donors were inseminated with conventional semen for both embryo production and their subsequent pregnancy. Without restriction on embryo recovery, the optimum combination was to use all heifers as donors of sex-selected embryos and all cows as donors of unselected embryos. It was more profitable to produce female embryos with X-sorted sperm than by sorting embryos. Embryo recipients were not economically justified in any scenario. In practice, the optimal strategy is herd-specific depending on the input costs, output values and the technical success of each reproductive technology in that herd. This single-year linear programming model adequately differentiates between breeding technologies within a herd, but further research is needed to develop dynamic models to consider genetic improvement and herd expansion.

  12. Maximizing the utility of monitoring to the adaptive management of natural resources

    USGS Publications Warehouse

    Kendall, William L.; Moore, Clinton T.; Gitzen, Robert A.; Cooper, Andrew B.; Millspaugh, Joshua J.; Licht, Daniel S.

    2012-01-01

    Data collection is an important step in any investigation about the structure or processes related to a natural system. In a purely scientific investigation (experiments, quasi-experiments, observational studies), data collection is part of the scientific method, preceded by the identification of hypotheses and the design of any manipulations of the system to test those hypotheses. Data collection and the manipulations that precede it are ideally designed to maximize the information that is derived from the study. That is, such investigations should be designed for maximum power to evaluate the relative validity of the hypotheses posed. When data collection is intended to inform the management of ecological systems, we call it monitoring. Note that our definition of monitoring encompasses a broader range of data-collection efforts than some alternative definitions – e.g. Chapter 3. The purpose of monitoring as we use the term can vary, from surveillance or “thumb on the pulse” monitoring (see Nichols and Williams 2006), intended to detect changes in a system due to any non-specified source (e.g. the North American Breeding Bird Survey), to very specific and targeted monitoring of the results of specific management actions (e.g. banding and aerial survey efforts related to North American waterfowl harvest management). Although a role of surveillance monitoring is to detect unanticipated changes in a system, the same result is possible from a collection of targeted monitoring programs distributed across the same spatial range (Box 4.1). In the face of limited budgets and many specific management questions, tying monitoring as closely as possible to management needs is warranted (Nichols and Williams 2006). Adaptive resource management (ARM; Walters 1986, Williams 1997, Kendall 2001, Moore and Conroy 2006, McCarthy and Possingham 2007, Conroy et al. 2008a) provides a context and specific purpose for monitoring: to evaluate decisions with respect to achievement

  13. Unsupervised learning applied in MER and ECG signals through Gaussians mixtures with the Expectation-Maximization algorithm and Variational Bayesian Inference.

    PubMed

    Vargas Cardona, Hernán Darío; Orozco, Álvaro Ángel; Álvarez, Mauricio A

    2013-01-01

    Automatic identification of biosignals is one of the most studied fields in biomedical engineering. In this paper, we present an approach for the unsupervised recognition of biomedical signals: Microelectrode Recordings (MER) and Electrocardiography (ECG) signals. The unsupervised learning is based on classical and Bayesian estimation theory. We employ Gaussian mixture models with two estimation methods. The first, known as the Expectation-Maximization (EM) algorithm, derives from frequentist estimation theory. The second, called variational inference, is obtained from Bayesian probabilistic estimation. In this framework, both methods are used for parameter estimation of Gaussian mixtures. The mixture models are used for unsupervised pattern classification through the responsibility matrix. The algorithms are applied to two real databases acquired in Parkinson's disease surgeries and electrocardiograms. The results show an accuracy over 85% on MER and 90% on ECG for identification of two classes. These results are statistically equal to or even better than parametric (naive Bayes) and nonparametric (K-nearest neighbor) classifiers.
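    A minimal sketch of the frequentist half of the method above: EM for a two-component, one-dimensional Gaussian mixture, with the E-step computing the responsibility matrix the abstract refers to (the variational-Bayes estimator is not shown). The data are synthetic draws, not MER or ECG recordings.

    ```python
    # Two-component 1-D Gaussian-mixture EM from scratch; synthetic data.
    import math, random

    def em_gmm_1d(xs, iters=50):
        mu = [min(xs), max(xs)]          # crude initialization
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: responsibilities r[i][k] ∝ pi_k * N(x_i | mu_k, var_k)
            r = []
            for x in xs:
                w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                     * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
                s = sum(w)
                r.append([wk / s for wk in w])
            # M-step: re-estimate mixing weights, means, variances
            for k in (0, 1):
                nk = sum(ri[k] for ri in r)
                pi[k] = nk / len(xs)
                mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
                var[k] = max(sum(ri[k] * (x - mu[k]) ** 2
                                 for ri, x in zip(r, xs)) / nk, 1e-6)
        return mu, var, pi

    random.seed(0)
    data = [random.gauss(0, 1) for _ in range(300)] + \
           [random.gauss(5, 1) for _ in range(300)]
    mu, var, pi = em_gmm_1d(data)
    ```

    On well-separated clusters like these, the estimated means recover the generating means and the responsibilities give the soft class assignments used for classification.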

  14. Illustrating Caffeine's Pharmacological and Expectancy Effects Utilizing a Balanced Placebo Design.

    ERIC Educational Resources Information Center

    Lotshaw, Sandra C.; And Others

    1996-01-01

    Hypothesizes that pharmacological and expectancy effects may be two principles that govern caffeine consumption in the same way they affect other drug use. Tests this theory through a balanced placebo design on 100 male undergraduate students. Expectancy set and caffeine content appeared equally powerful, and worked additively, to affect…

  15. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    NASA Astrophysics Data System (ADS)

    Floberg, J. M.; Holden, J. E.

    2013-02-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
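    The two-stage idea above can be illustrated in one dimension: Gaussian smoothing suppresses noise, then a few EM (Richardson-Lucy) deconvolution iterations restore the frequencies the filter attenuated. The kernel width, iteration count, and test signal are illustrative choices, not the paper's settings.

    ```python
    # 1-D toy of the STEM idea: Gaussian filtering followed by EM
    # (Richardson-Lucy) deconvolution with the same kernel.
    import math

    def gauss_kernel(sigma, radius):
        k = [math.exp(-i * i / (2 * sigma * sigma))
             for i in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]

    def convolve(x, k):
        r, n = len(k) // 2, len(x)
        return [sum(k[j + r] * x[min(max(i + j, 0), n - 1)]
                    for j in range(-r, r + 1)) for i in range(n)]

    def rl_deconvolve(blurred, k, iters=20):
        est = blurred[:]                      # start from the blurred signal
        for _ in range(iters):
            conv = convolve(est, k)
            ratio = [b / max(c, 1e-12) for b, c in zip(blurred, conv)]
            corr = convolve(ratio, k)         # kernel is symmetric, no flip needed
            est = [e * c for e, c in zip(est, corr)]
        return est

    signal = [0.0] * 20 + [10.0] * 5 + [0.0] * 20   # nonnegative "activity"
    k = gauss_kernel(2.0, 6)
    blurred = convolve(signal, k)
    restored = rl_deconvolve(blurred, k, iters=20)
    ```

    Because the kernel is normalized and the signal sits away from the boundaries, the EM updates preserve total counts while sharpening the blurred peak back toward its original height.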

  16. Expectation-maximization of the potential of mean force and diffusion coefficient in Langevin dynamics from single molecule FRET data photon by photon.

    PubMed

    Haas, Kevin R; Yang, Haw; Chu, Jhih-Wei

    2013-12-12

    The dynamics of a protein along a well-defined coordinate can be formally projected onto the form of an overdamped Langevin equation. Here, we present a comprehensive statistical-learning framework for simultaneously quantifying the deterministic force (the potential of mean force, PMF) and the stochastic force (characterized by the diffusion coefficient, D) from single-molecule Förster-type resonance energy transfer (smFRET) experiments. The likelihood functional of the Langevin parameters, PMF and D, is expressed by a path integral of the latent smFRET distance that follows Langevin dynamics and realized by the donor and the acceptor photon emissions. The solution is made possible by an eigen decomposition of the time-symmetrized form of the corresponding Fokker-Planck equation coupled with photon statistics. To extract the Langevin parameters from photon arrival time data, we advance the expectation-maximization algorithm in statistical learning, originally developed for and mostly used in discrete-state systems, to a general form in the continuous space that allows for a variational calculus on the continuous PMF function. We also introduce the regularization of the solution space in this Bayesian inference based on a maximum trajectory-entropy principle. We use a highly nontrivial example with realistically simulated smFRET data to illustrate the application of this new method.

  17. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a significant challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.

  18. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution.

    PubMed

    Floberg, J M; Holden, J E

    2013-02-21

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.

  19. Pt skin on AuCu intermetallic substrate: a strategy to maximize Pt utilization for fuel cells.

    PubMed

    Wang, Gongwei; Huang, Bing; Xiao, Li; Ren, Zhandong; Chen, Hao; Wang, Deli; Abruña, Héctor D; Lu, Juntao; Zhuang, Lin

    2014-07-09

    The dependence on Pt catalysts has been a major issue of proton-exchange membrane (PEM) fuel cells. Strategies to maximize the Pt utilization in catalysts include two main approaches: to put Pt atoms only at the catalyst surface and to further enhance the surface-specific catalytic activity (SA) of Pt. Thus far there has been no practical design that combines these two features into one single catalyst. Here we report a combined computational and experimental study on the design and implementation of Pt-skin catalysts with significantly improved SA toward the oxygen reduction reaction (ORR). Through screening, using density functional theory (DFT) calculations, a Pt-skin structure on AuCu(111) substrate, consisting of 1.5 monolayers of Pt, is found to have an appropriately weakened oxygen affinity, in comparison to that on Pt(111), which would be ideal for ORR catalysis. Such a structure is then realized by substituting the Cu atoms in three surface layers of AuCu intermetallic nanoparticles (AuCu iNPs) with Pt. The resulting Pt-skinned catalyst (denoted as Pt(S)AuCu iNPs) has been characterized in depth using synchrotron XRD, XPS, HRTEM, and HAADF-STEM/EDX, such that the Pt-skin structure is unambiguously identified. The thickness of the Pt skin was determined to be less than two atomic layers. Finally the catalytic activity of Pt(S)AuCu iNPs toward the ORR was measured via rotating disk electrode (RDE) voltammetry through which it was established that the SA was more than 2 times that of a commercial Pt/C catalyst. Taking into account the ultralow Pt loading in Pt(S)AuCu iNPs, the mass-specific catalytic activity (MA) was determined to be 0.56 A/mg(Pt)@0.9 V, a value that is well beyond the DOE 2017 target for ORR catalysts (0.44 A/mg(Pt)@0.9 V). These findings provide a strategic design and a realizable approach to high-performance and Pt-efficient catalysts for fuel cells.

  20. Resource utilization by children with developmental disabilities in Kenya: discrepancy analysis of parents' expectation-to-importance appraisals.

    PubMed

    Mutua, N Kagendo; Miller, Janice Williams; Mwavita, Mwarumba

    2002-01-01

    The purpose of this study was to describe parental perceptions of eight physical and human resources available to meet the needs of children with developmental disabilities in Kenya. Specifically, the study assessed the discrepancy between the importance parents attached to specified resources and the expected use of those resources by their children with developmental disabilities. Discrepancy analysis was conducted on parents' expectation-to-importance appraisals of eight resources identified in previous research: health, education, friendships, husband/wife, religious organization, community membership/acceptance, employment/work, and home. Overall, parental appraisal of likely access-to-importance was significantly related across all eight physical and human resource areas. Discrepancy scores ranged from negative, through zero, to positive, categorized as under-utilized, congruent, and over-utilized, respectively. Chi-square analyses were non-significant for gender across all resources, with only slight gender differences noted on three resources. Most parents reported a match between expected use and importance in five of the eight community resources: health (57.4%), friends (54.6%), religious affiliation (59.8%), acceptance in the community (60.3%), and having one's own home (62.6%). However, "husband/wife" fell outside the congruent range (50.4%), with slight gender differences noted. Finally, the two resource areas where the majority of parents reported noncongruence were educational programs and employment/career services.
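    The discrepancy scoring described above reduces to a simple rule: subtract importance from expected use and read the sign. A sketch with invented ratings on an assumed 1-5 scale (not the parents' actual appraisals):

    ```python
    # Discrepancy scoring: expected use minus importance, sign mapped to
    # the three categories named above. Ratings below are invented examples.

    def categorize(expected_use, importance):
        d = expected_use - importance
        if d < 0:
            return "under-utilized"
        return "congruent" if d == 0 else "over-utilized"

    ratings = {"health": (4, 4), "education": (2, 5),
               "employment": (2, 4), "home": (5, 5)}
    labels = {res: categorize(e, i) for res, (e, i) in ratings.items()}
    ```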

  1. Maximizing operational effectiveness and utility of the Mobile Infrared Scene Projector (MIRSP) during System Integration Laboratory (SIL) testing

    NASA Astrophysics Data System (ADS)

    Zabel, Kenneth W.; Brooks, Geoffrey W.; Owens, Bruce

    2000-07-01

    Testing advanced weapons systems, like the Comanche helicopter, has always presented technical challenges to the Test and Evaluation (T&E) community. Because these weapon systems are on the cutting edge of technology, it is the tester's responsibility to develop the tools and techniques to fully exercise a new weapon system's capability. As with most testing, state-of-the-art tools that provide test stimuli matching or exceeding the fidelity of the systems under test must be developed. One such tool under development to test FLIR sensors is the Mobile Infrared Scene Projector (MIRSP). This paper will investigate current plans to support the T&E of the Comanche FLIR sensor during SIL testing. Planning the T&E usage of the MIRSP has involved identifying limitations, both in hardware and software, and determining how to minimize their effects or proposing solutions to correct them. The final result of this effort is to maximize the operational effectiveness of the MIRSP in order to benefit T&E of all FLIR sensors in the future.

  2. Expectation versus Reality: The Impact of Utility on Emotional Outcomes after Returning Individualized Genetic Research Results in Pediatric Rare Disease Research, a Qualitative Interview Study

    PubMed Central

    Cacioppo, Cara N.; Chandler, Ariel E.; Towne, Meghan C.; Beggs, Alan H.; Holm, Ingrid A.

    2016-01-01

    Purpose Much information on parental perspectives on the return of individual research results (IRR) in pediatric genomic research is based on hypothetical rather than actual IRR. Our aim was to understand how the expected utility to parents who received IRR on their child from a genetic research study compared to the actual utility of the IRR received. Methods We conducted individual telephone interviews with parents who received IRR on their child through participation in the Manton Center for Orphan Disease Research Gene Discovery Core (GDC) at Boston Children’s Hospital (BCH). Results Five themes emerged around the utility that parents expected and actually received from IRR: predictability, management, family planning, finding answers, and helping science and/or families. Parents expressing negative or mixed emotions after IRR return were those who did not receive the utility they expected from the IRR. Conversely, parents who expressed positive emotions were those who received as much or greater utility than expected. Conclusions Discrepancies between expected and actual utility of IRR affect the experiences of parents and families enrolled in genetic research studies. An informed consent process that fosters realistic expectations between researchers and participants may help to minimize any negative impact on parents and families. PMID:27082877

  3. Implementation of health information technology to maximize efficiency of resource utilization in a geographically dispersed prenatal care delivery system.

    PubMed

    Cochran, Marlo Baker; Snyder, Russell R; Thomas, Elizabeth; Freeman, Daniel H; Hankins, Gary D V

    2012-04-01

    This study investigated the utilization of health information technology (HIT) to enhance resource utilization in a geographically dispersed tertiary care system with extensive outpatient and delivery services. It was initiated as a result of a systems change implemented after Hurricane Ike devastated southeast Texas. A retrospective database and electronic medical record review was performed, which included data collection from all patients evaluated 18 months prior (epoch I) and 18 months following (epoch II) the landfall of Hurricane Ike. The months immediately following the storm were omitted from the analysis, allowing time to establish a new baseline. We analyzed a total of 21,201 patients evaluated in triage at the University of Texas Medical Branch. Epoch I consisted of 11,280 patients and epoch II consisted of 9922 patients. Using HIT, we were able to decrease the number of visits to triage while simultaneously managing more complex patients in the outpatient setting with no clinically significant change in maternal or fetal outcome. This study developed an innovative model of care using constrained resources while providing quality and safety to our patients without additional cost to the health care delivery system.

  4. Utilizing WASP and hot waterflood to maximize the value of a thermally mature steam drive in the West Coalinga field

    SciTech Connect

    DeFrancisco, S.T.; Sanford, S.J.; Hong, K.C.

    1995-12-31

    The Water-Alternating-Steam-Process (WASP) has been utilized on Section 13D, West Coalinga Field since 1988. Originally implemented to control premature, high-temperature steam breakthrough, the process has improved sales oil recovery in both breakthrough and non-breakthrough patterns. A desktop, semi-conceptual simulation study was initiated in June 1993 to provide a theoretical basis for optimizing and monitoring the WASP project. The simulation study results showed that the existing WASP injection strategy could be further optimized. It also showed that conversion to continuous hot waterflood was the optimum injection strategy for the steamflood sands. The Section 13D WASP project was gradually converted to hot waterflood during 1994. Conversion to hot waterflood has significantly improved project cash flow and increased the value of the Section 13D thermal project.

  5. MO-FG-207-03: Maximizing the Utility of Integrated PET/MRI in Clinical Applications

    SciTech Connect

    Behr, S.

    2015-06-15

    The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems each have their own characteristics. Thorough analyses of existing technical data and evaluation of the necessary performance metrics for quality assurance can be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: (1) understand the technological advances of clinical PET/MRI systems; (2) correctly identify clinical applications that can benefit from PET/MRI; (3) understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.

  6. Expectant Mothers Maximizing Opportunities: Maternal Characteristics Moderate Multifactorial Prenatal Stress in the Prediction of Birth Weight in a Sample of Children Adopted at Birth

    PubMed Central

    Brotnow, Line; Reiss, David; Stover, Carla S.; Ganiban, Jody; Leve, Leslie D.; Neiderhiser, Jenae M.; Shaw, Daniel S.; Stevens, Hanna E.

    2015-01-01

    Background Mothers’ stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective “stressors” and subjective “distress” and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Method Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Results Distress during pregnancy showed a statistically significant association with birthweight (R² = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (ΔR² = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (ΔR² = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. Conclusions The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight. PMID:26544958
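    The hierarchical step reported above (the increment in R² gained when a stressor block is added to a distress-only model) can be sketched on synthetic data; the coefficients, sample size, and noise level below are invented, not the study's estimates.

    ```python
    # Hierarchical regression sketch: fit a base OLS model, add a block of
    # predictors, report the R-squared increment. Synthetic data only.
    import random

    def ols_r2(ys, cols):
        """R-squared of an OLS fit of ys on the given predictor columns
        (plus intercept), via normal equations and Gauss-Jordan elimination."""
        X = [[1.0] + [c[i] for c in cols] for i in range(len(ys))]
        p = len(X[0])
        M = [[sum(row[a] * row[b] for row in X) for b in range(p)]
             + [sum(row[a] * y for row, y in zip(X, ys))] for a in range(p)]
        for i in range(p):
            piv = M[i][i]
            M[i] = [v / piv for v in M[i]]
            for k in range(p):
                if k != i:
                    M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
        coef = [M[i][p] for i in range(p)]
        yhat = [sum(c * xj for c, xj in zip(coef, row)) for row in X]
        ybar = sum(ys) / len(ys)
        ss_res = sum((y - f) ** 2 for y, f in zip(ys, yhat))
        ss_tot = sum((y - ybar) ** 2 for y in ys)
        return 1.0 - ss_res / ss_tot

    random.seed(1)
    n = 400
    distress = [random.gauss(0, 1) for _ in range(n)]
    stressors = [random.gauss(0, 1) for _ in range(n)]
    birthweight = [3400 - 60 * d - 110 * s + random.gauss(0, 300)
                   for d, s in zip(distress, stressors)]
    r2_base = ols_r2(birthweight, [distress])             # step 1: distress only
    r2_full = ols_r2(birthweight, [distress, stressors])  # step 2: add stressors
    delta_r2 = r2_full - r2_base
    ```

    Because the models are nested, r2_full can never fall below r2_base; delta_r2 plays the role of the R² change the abstract reports for the stressor block.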

  7. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
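    The expectation-maximization side of the comparison above can be sketched with a tiny emission-tomography toy: a 3-pixel phantom, a known 3x3 system matrix, and multiplicative MLEM updates (the exact inversion formula is geometry-specific and not shown). The matrix and activities are invented, not the PBR phantom.

    ```python
    # Minimal MLEM reconstruction: A[i][j] is the probability that a decay
    # in pixel j is counted in detector bin i (columns sum to 1). All
    # numbers are illustrative placeholders.

    A = [[0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3],
         [0.2, 0.2, 0.6]]
    true_x = [50.0, 20.0, 80.0]                       # "phantom" activities
    y = [sum(A[i][j] * true_x[j] for j in range(3))   # noiseless counts
         for i in range(3)]

    x = [1.0, 1.0, 1.0]                               # flat initial estimate
    sens = [sum(A[i][j] for i in range(3)) for j in range(3)]
    for _ in range(1000):                             # MLEM multiplicative updates
        proj = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        for j in range(3):
            back = sum(A[i][j] * y[i] / proj[i] for i in range(3))
            x[j] = x[j] * back / sens[j]
    ```

    With noiseless, consistent data and an invertible system matrix, the iterates converge toward the true activities, and total counts are preserved from the first update onward.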

  8. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  9. Preparation of multiband structure with Cu2Se/Ga3Se2/In3Se2 thin films by thermal evaporation technique for maximal solar spectrum utilization

    NASA Astrophysics Data System (ADS)

    Mohan, A.; Rajesh, S.; Gopalakrishnan, M.

    2016-10-01

    The paper investigates and discusses the formation of a multiband structure through Cu2Se-Ga3Se2-In3Se2 thin films for maximal solar spectrum utilization. Stacking of different semiconductor materials with various band gaps was done by a successive evaporation method, with the layers arranged by band gap value (low to high from the substrate). The XRD results exhibit the formation of CIGS composites through this successive evaporation of Cu2Se/Ga3Se2/In3Se2 and subsequent heat treatment. Scanning Electron Microscope images show improved crystallinity with reduced large-grain-boundary scattering after annealing. Optical spectra show strong absorption in the UV-visible region and higher transmission in the infrared and near-infrared regions. The optical band gap calculated for the as-prepared films is 2.20 eV, and the band gap splits into 1.62, 1.92, and 2.27 eV for annealed samples. Such multiband structures are needed to utilize the full solar spectrum.

  10. Reduced maximal inhibition in phenotypic susceptibility assays indicates that viral strains resistant to the CCR5 antagonist maraviroc utilize inhibitor-bound receptor for entry.

    PubMed

    Westby, Mike; Smith-Burchnell, Caroline; Mori, Julie; Lewis, Marilyn; Mosley, Michael; Stockdale, Mark; Dorr, Patrick; Ciaramella, Giuseppe; Perros, Manos

    2007-03-01

    Maraviroc is a CCR5 antagonist in clinical development as one of a new class of antiretrovirals targeting human immunodeficiency virus type 1 (HIV-1) coreceptor binding. We investigated the mechanism of HIV resistance to maraviroc by using in vitro sequential passage and site-directed mutagenesis. Serial passage through increasing maraviroc concentrations failed to select maraviroc-resistant variants from some laboratory-adapted and clinical isolates of HIV-1. However, high-level resistance to maraviroc was selected from three of six primary isolates passaged in peripheral blood lymphocytes (PBL). The SF162 strain acquired resistance to maraviroc in both treated and control cultures; all resistant variants were able to use CXCR4 as a coreceptor. In contrast, maraviroc-resistant virus derived from isolates CC1/85 and RU570 remained CCR5 tropic, as evidenced by susceptibility to the CCR5 antagonist SCH-C, resistance to the CXCR4 antagonist AMD3100, and an inability to replicate in CCR5 Delta32/Delta32 PBL. Strain-specific mutations were identified in the V3 loop of maraviroc-resistant CC1/85 and RU570. The envelope-encoding region of maraviroc-resistant CC1/85 was inserted into an NL4-3 background. This recombinant virus was completely resistant to maraviroc but retained susceptibility to aplaviroc. Reverse mutation of gp120 residues 316 and 323 in the V3 loop (numbering from HXB2) to their original sequence restored wild-type susceptibility to maraviroc, while reversion of either mutation resulted in a partially sensitive virus with reduced maximal inhibition (plateau). The plateaus are consistent with the virus having acquired the ability to utilize maraviroc-bound receptor for entry. This hypothesis was further corroborated by the observation that a high concentration of maraviroc blocks the activity of aplaviroc against maraviroc-resistant virus.

  11. Utilization of Molecular, Phenotypic, and Geographical Diversity to Develop Compact Composite Core Collection in the Oilseed Crop, Safflower (Carthamus tinctorius L.) through Maximization Strategy

    PubMed Central

    Kumar, Shivendra; Ambreen, Heena; Variath, Murali T.; Rao, Atmakuri R.; Agarwal, Manu; Kumar, Amar; Goel, Shailendra; Jagannath, Arun

    2016-01-01

    Safflower (Carthamus tinctorius L.) is a dryland oilseed crop yielding high quality edible oil. Previous studies have described significant phenotypic variability in the crop and used geographical distribution and phenotypic trait values to develop core collections. However, the molecular diversity component was lacking in the earlier collections thereby limiting their utility in breeding programs. The present study evaluated the phenotypic variability for 12 agronomically important traits during two growing seasons (2011–12 and 2012–13) in a global reference collection of 531 safflower accessions, assessed earlier by our group for genetic diversity and population structure using AFLP markers. Significant phenotypic variation was observed for all the agronomic traits in the representative collection. Cluster analysis of phenotypic data grouped the accessions into five major clusters. Accessions from the Indian Subcontinent and America harbored maximal phenotypic variability with unique characters for a few traits. MANOVA analysis indicated significant interaction between genotypes and environment for both the seasons. Initially, six independent core collections (CC1–CC6) were developed using molecular marker and phenotypic data for two seasons through POWERCORE and MSTRAT. These collections captured the entire range of trait variability but failed to include complete genetic diversity represented in 19 clusters reported earlier through Bayesian analysis of population structure (BAPS). Therefore, we merged the three POWERCORE core collections (CC1–CC3) to generate a composite core collection, CartC1 and three MSTRAT core collections (CC4–CC6) to generate another composite core collection, CartC2. The mean difference percentage, variance difference percentage, variable rate of coefficient of variance percentage, coincidence rate of range percentage, Shannon's diversity index, and Nei's gene diversity for CartC1 were 11.2, 43.7, 132.4, 93.4, 0.47, and 0

  12. Utilization of Molecular, Phenotypic, and Geographical Diversity to Develop Compact Composite Core Collection in the Oilseed Crop, Safflower (Carthamus tinctorius L.) through Maximization Strategy.

    PubMed

    Kumar, Shivendra; Ambreen, Heena; Variath, Murali T; Rao, Atmakuri R; Agarwal, Manu; Kumar, Amar; Goel, Shailendra; Jagannath, Arun

    2016-01-01

    Safflower (Carthamus tinctorius L.) is a dryland oilseed crop yielding high quality edible oil. Previous studies have described significant phenotypic variability in the crop and used geographical distribution and phenotypic trait values to develop core collections. However, the molecular diversity component was lacking in the earlier collections, thereby limiting their utility in breeding programs. The present study evaluated the phenotypic variability for 12 agronomically important traits during two growing seasons (2011-12 and 2012-13) in a global reference collection of 531 safflower accessions, assessed earlier by our group for genetic diversity and population structure using AFLP markers. Significant phenotypic variation was observed for all the agronomic traits in the representative collection. Cluster analysis of phenotypic data grouped the accessions into five major clusters. Accessions from the Indian Subcontinent and America harbored maximal phenotypic variability with unique characters for a few traits. MANOVA indicated a significant genotype-by-environment interaction in both seasons. Initially, six independent core collections (CC1-CC6) were developed using molecular marker and phenotypic data for two seasons through POWERCORE and MSTRAT. These collections captured the entire range of trait variability but failed to include the complete genetic diversity represented in 19 clusters reported earlier through Bayesian analysis of population structure (BAPS). Therefore, we merged the three POWERCORE core collections (CC1-CC3) to generate a composite core collection, CartC1, and the three MSTRAT core collections (CC4-CC6) to generate another composite core collection, CartC2. The mean difference percentage, variance difference percentage, variable rate of coefficient of variance percentage, coincidence rate of range percentage, Shannon's diversity index, and Nei's gene diversity for CartC1 were 11.2, 43.7, 132.4, 93.4, 0.47, and 0
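
    The diversity criteria quoted above (Shannon's diversity index and Nei's gene diversity) have standard closed forms. A minimal sketch in Python, using made-up marker frequencies rather than the study's AFLP data:

```python
import math

def shannon_index(freqs):
    # H' = -sum(p * ln p) over the nonzero class frequencies
    return -sum(p * math.log(p) for p in freqs if p > 0)

def nei_gene_diversity(freqs):
    # Nei's gene diversity (expected heterozygosity): h = 1 - sum(p_i^2)
    return 1.0 - sum(p * p for p in freqs)

# Hypothetical allele frequencies at a single marker locus
freqs = [0.5, 0.3, 0.2]
print(round(shannon_index(freqs), 3))       # 1.03
print(round(nei_gene_diversity(freqs), 3))  # 0.62
```

    Averaging these per-locus values over all markers gives collection-level figures like the 0.47 Shannon index reported for CartC1.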

  13. Managing Expectations: Results from Case Studies of US Water Utilities on Preparing for, Coping with, and Adapting to Extreme Events

    NASA Astrophysics Data System (ADS)

    Beller-Simms, N.; Metchis, K.

    2014-12-01

    Water utilities, reeling from increased impacts of successive extreme events such as floods, droughts, and derechos, are taking a more proactive role in preparing for future events. A recent study by Federal and water foundation investigators reveals how six US water utilities and their regions prepared for, responded to, and coped with recent extreme weather and climate events, and the lessons they are using to plan future adaptation and resilience activities. Two case studies will be highlighted. (1) Sonoma County, CA, has had alternating floods and severe droughts. In 2009, this area, home to competing water users, namely agricultural crops, wineries, tourism, and fisheries, faced a three-year drought, accompanied at the end by intense frosts. Competing uses of water threatened the grape harvest, endangered the fish industry, and resulted in a series of regulations and court cases. Five years later, new efforts by partners across the entire watershed have identified mutual opportunities for increased basin sustainability in the face of a changing climate. (2) Washington, DC, had a derecho in late June 2012, which curtailed water, communications, and power delivery during a record heat spell that impacted hundreds of thousands of residents and lasted over the height of the tourist-intensive July 4th holiday. Lessons from this event were applied three months later in anticipation of the approaching Superstorm Sandy. This study will help other communities improve their resiliency in the face of future climate extremes. For example, this study revealed that (1) communities are planning for multiple types and occurrences of extreme events, which are becoming more severe and frequent and are impacting communities that are expanding into more vulnerable areas, and (2) decisions by one sector cannot be made in a vacuum and require the scientific, sectoral, and citizen communities to work toward sustainable solutions.

  14. Home nursing and home help for dementia patients: Predictors for utilization and expected quality from a family caregiver's point of view.

    PubMed

    Graessel, Elmar; Luttenberger, Katharina; Bleich, Stefan; Adabbo, Raffaela; Donath, Carolin

    2011-01-01

    Little is known about the factors that influence utilization of home nursing and home help or about the quality expectations of family caregivers of dementia patients. These questions are addressed in the following paper. The cross-sectional study was carried out as an anonymous written survey of family caregivers of dementia patients in four regions of Germany. Quantitative and qualitative data from 404 family caregivers were analyzed using binary logistic regression analysis and content analysis. We found that the subjective need for home nursing or home help, respectively, and the age of the dementia patient are significant predictors of utilization. Utilization of home nursing is also predicted by the age of the family caregiver. Punctuality of the staff is the dominant quality criterion. Hence, in order to reduce the number of Alzheimer's disease (AD) caregivers who believe they do not need home nursing or home help relative to the number who genuinely do not need it, caregivers should be transparently informed of the relevant advantages and quality principles of home nursing and home help.

  15. On deciding to have a lobotomy: either lobotomies were justified or decisions under risk should not always seek to maximise expected utility.

    PubMed

    Cooper, Rachel

    2014-02-01

    In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk be made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced, or we can conclude that the use of formal decision procedures, such as MEU, is problematic.
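
    The MEU rule the paper examines can be made concrete. A minimal sketch, with purely illustrative probabilities and utilities (not values from the paper):

```python
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

options = {
    # Hypothetical assessment of a risky operation vs. no treatment;
    # the numbers are invented for illustration only.
    "operate":    [(0.6, 80), (0.3, 20), (0.1, -100)],
    "do_nothing": [(1.0, 10)],
}

# MEU: choose the action with the highest probability-weighted utility
best = max(options, key=lambda a: expected_utility(options[a]))
print(best, expected_utility(options[best]))  # operate 44.0
```

    Under these invented numbers the risky operation wins, which is the shape of the argument the paper runs for 1940s-era psychosurgery.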

  16. Variation of epimedins A - C and icariin in ten representative populations of Epimedium brevicornu Maxim., and implications for utilization.

    PubMed

    Xu, Yanqin; Li, Zuozhou; Yuan, Ling; Zhang, Xuejun; Lu, Dayan; Huang, Hongwen; Wang, Ying

    2013-04-01

    The concentration variations of the main flavonoids, epimedins A-C and icariin, among ten representative populations of Epimedium brevicornu Maxim. were assessed by HPLC. The populations were collected during the flowering stage and included 419 individual samples. Remarkable variations within and among populations were detected. The SXXA population (see Fig. 1) was an outlier due to its significantly low concentrations (<1.00-4.46 mg/g). Even without SXXA, significant concentration differences among populations were still observed in epimedin A (2.31-8.42 mg/g), epimedin B (6.67-55.7 mg/g), epimedin C (5.39-23.0 mg/g), icariin (8.50-39.9 mg/g), and their total (29.1-123 mg/g). All populations except SXXA showed much higher concentrations than the recommended standards (i.e. 5 mg/g for icariin and 13 mg/g for the total). A high-concentration population structure, estimated both by principal component analysis (PCA) and by unweighted pair group method with arithmetic mean (UPGMA) cluster analysis based on Euclidean distances, was observed. Both methods separated the populations into four groups defined by the concentrations of the four main flavonoids. The populations located north of the Yellow River (SXLC and SXQS) were clustered together and characterized by the highest concentrations of epimedin B, icariin, and their total. Considering the high concentrations of the main flavonoids and the abundant resources, E. brevicornu could be exploited as a good medicinal resource for Herba Epimedii and offers tremendous potential for commercial development, but the SXXA population warrants special attention, and further study is needed.
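
    UPGMA clustering on Euclidean distances, as used above, can be sketched in a few lines. A naive pure-Python version run on hypothetical concentration profiles (the data below are not from the paper):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def upgma(points, n_clusters):
    # Naive UPGMA: repeatedly merge the two clusters with the smallest
    # average pairwise Euclidean distance between their members.
    clusters = [[i] for i in range(len(points))]

    def avg_dist(c1, c2):
        return (sum(euclid(points[i], points[j]) for i in c1 for j in c2)
                / (len(c1) * len(c2)))

    while len(clusters) > n_clusters:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: avg_dist(clusters[ab[0]], clusters[ab[1]]))
        clusters[i] = clusters[i] + clusters.pop(j)
    return clusters

# Hypothetical (epimedin B, icariin) profiles, mg/g, for five populations
profiles = [(55.0, 39.0), (50.0, 35.0), (7.0, 9.0), (10.0, 12.0), (30.0, 20.0)]
print(upgma(profiles, 2))  # [[0, 1], [2, 3, 4]]
```

    The two high-concentration profiles group together, mirroring how SXLC and SXQS clustered in the paper's analysis.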

  17. Maximizing the utilization of Laminaria japonica as biomass via improvement of alginate lyase activity in a two-phase fermentation system.

    PubMed

    Oh, Yuri; Xu, Xu; Kim, Ji Young; Park, Jong Moon

    2015-08-01

    Brown seaweed contains up to 67% carbohydrates by dry weight and presents high potential as a polysaccharide feedstock for biofuel production. To use brown seaweed effectively as a biomass, degradation of alginate is the major challenge due to its complicated structure and low solubility in water. This study focuses on the isolation of alginate-degrading bacteria, the determination of optimum fermentation conditions, and a comparison of the conventional single fermentation system with a two-phase fermentation system that separately ferments alginate and mannitol extracted from Laminaria japonica. The maximum organic acid yield and volatile solids (VS) reduction obtained were 0.516 g/g and 79.7%, respectively, using the two-phase fermentation system, in which alginate fermentation was carried out at pH 7 and mannitol fermentation at pH 8. The two-phase fermentation system increased organic acid yield 1.14-fold and VS reduction 1.45-fold compared to the conventional single fermentation system at pH 8. The results show that the two-phase fermentation system improved the utilization of alginate by separating alginate from mannitol, leading to enhanced alginate lyase activity.

  18. DEVELOPMENT OF A VALIDATED MODEL FOR USE IN MINIMIZING NOx EMISSIONS AND MAXIMIZING CARBON UTILIZATION WHEN CO-FIRING BIOMASS WITH COAL

    SciTech Connect

    Larry G. Felix; P. Vann Bush; Stephen Niksa

    2003-04-30

    In full-scale boilers, the effect of biomass cofiring on NO{sub x} and unburned carbon (UBC) emissions has been found to be site-specific. Few sets of field data are comparable and no consistent database of information exists upon which cofiring fuel choice or injection system design can be based to assure that NO{sub x} emissions will be minimized and UBC reduced. This report presents the results of a comprehensive project that generated an extensive set of pilot-scale test data that were used to validate a new predictive model for the cofiring of biomass and coal. All testing was performed at the 3.6 MMBtu/hr (1.75 MW{sub t}) Southern Company Services/Southern Research Institute Combustion Research Facility where a variety of burner configurations, coals, biomasses, and biomass injection schemes were utilized to generate a database of consistent, scalable, experimental results (422 separate test conditions). This database was then used to validate a new model for predicting NO{sub x} and UBC emissions from the cofiring of biomass and coal. This model is based on an Advanced Post-Processing (APP) technique that generates an equivalent network of idealized reactor elements from a conventional CFD simulation. The APP reactor network is a computational environment that allows for the incorporation of all relevant chemical reaction mechanisms and provides a new tool to quantify NO{sub x} and UBC emissions for any cofired combination of coal and biomass.

  19. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  20. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  1. Great Expectations.

    ERIC Educational Resources Information Center

    Natale, Jo Anna

    1993-01-01

    Inside one Washington, DC, elementary school, Principal John Pannell has high hopes for his students and an expansive school vision. Malcolm X School compensates for disorder outside by clearly inculcating rules and behavior expectations. Children in school uniforms daily repeat a motto promoting Malcolm X as a school of love allowing no hitting,…

  2. Leadership Expectancy.

    ERIC Educational Resources Information Center

    Phillips, Ray C.

    A review of theories of expectation as related to behavior shows a high correlation between educational leaders' perceptions of their faculties and the climate and quality of instructional programs. Thus, effective faculties and high quality educational programs could be linked to a particular type of leadership. Leaders who hold high expectations…

  3. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  4. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
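
    The finding that experience can lead people to underweight rare events has a simple computational illustration: an agent that compares small samples from each option will usually prefer a safe option over a rare-event gamble with a higher expected value. The payoffs and sample size below are assumptions for illustration, not the paper's experimental design:

```python
import random

random.seed(0)

def risky():
    # Rare-event gamble: pays 32 with probability 0.1, else 0 (EV = 3.2)
    return 32 if random.random() < 0.1 else 0

def safe():
    return 3  # certain payoff (EV = 3.0, lower than the gamble's)

def choose_from_experience(sample_size=5):
    # "Decisions from experience": compare small samples of each option
    # and pick whichever looked better in the sample.
    r = sum(risky() for _ in range(sample_size)) / sample_size
    s = sum(safe() for _ in range(sample_size)) / sample_size
    return "risky" if r > s else "safe"

trials = 10000
picked_risky = sum(choose_from_experience() == "risky"
                   for _ in range(trials)) / trials
print(round(picked_risky, 2))  # well below 0.5, despite the risky option's higher EV
```

    With a sample of five draws, the rare payoff is often never seen, so the sample favors the safe option most of the time: experience underweights the rare event.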

  5. Great Expectations for "Great Expectations."

    ERIC Educational Resources Information Center

    Ridley, Cheryl

    Designed to make the study of Dickens'"Great Expectations" an appealing and worthwhile experience, this paper presents a unit of study intended to help students gain (1) an appreciation of Dickens' skill at creating realistic human characters; (2) an insight into the problems of a young man confused by false values and unreal ambitions…

  6. How To: Maximize Google

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.

  7. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then appear with higher probability. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
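
    The core idea, attracting solutions toward a positive function treated as a probability density, can be sketched classically with a Metropolis random walk. This is only an illustrative analogue, not the report's quantum-inspired algorithm, and the function f is made up:

```python
import math
import random

random.seed(1)

def f(x):
    # Positive function to maximize; its value is treated as an
    # unnormalized probability density, echoing the QIM idea.
    return math.exp(-(x - 2.0) ** 2) + 0.5 * math.exp(-(x + 1.0) ** 2)

def sample_and_track_max(steps=20000, step_size=0.5):
    x = 0.0
    best_x, best_f = x, f(x)
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        # Metropolis rule: points with larger f(x) are visited more often
        if random.random() < min(1.0, f(cand) / f(x)):
            x = cand
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

bx, bf = sample_and_track_max()
print(round(bx, 1), round(bf, 2))  # converges near x = 2, the global maximum
```

    The walk spends most of its time near the dominant mode, so tracking the best visited point recovers the global maximum without exhaustive search.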

  8. Utilizing Spectrum Efficiently (USE)

    DTIC Science & Technology

    2011-02-28

    and transmission rates to maximize the expected values of forward progress [3,5,21,33] as well as transport capacity [18,27] over a Nakagami-m channel...2009. [27] T. D. Goswami, J. M. Shea, T. F. Wong, M. Rao, and J. Glover, “Maximizing Transport Capacity for Geographic Transmission on Nakagami-m

  9. Smoking Outcome Expectancies among College Students.

    ERIC Educational Resources Information Center

    Brandon, Thomas H.; Baker, Timothy B.

    Alcohol expectancies have been found to predict later onset of drinking among adolescents. This study examined whether the relationship between level of alcohol use and expectancies is paralleled with cigarette smoking, and attempted to identify the content of smoking expectancies. An instrument to measure the subjective expected utility of…

  10. Creating a bridge between data collection and program planning: a technical assistance model to maximize the use of HIV/AIDS surveillance and service utilization data for planning purposes.

    PubMed

    Logan, Jennifer A; Beatty, Maile; Woliver, Renee; Rubinstein, Eric P; Averbach, Abigail R

    2005-12-01

    Over time, improvements in HIV/AIDS surveillance and service utilization data have increased their usefulness for planning programs, targeting resources, and otherwise informing HIV/AIDS policy. However, community planning groups, service providers, and health department staff often have difficulty in interpreting and applying the wide array of data now available. We describe the development of the Bridging Model, a technical assistance model for overcoming barriers to the use of data for program planning. Through the use of an iterative feedback loop in the model, HIV/AIDS data products are constantly evolving to better inform the decision-making tasks of their multiple users. Implementation of this model has led to improved data quality and data products and to a greater willingness and ability among stakeholders to use the data for planning purposes.

  11. COPD: maximization of bronchodilation.

    PubMed

    Nardini, Stefano; Camiciottoli, Gianna; Locicero, Salvatore; Maselli, Rosario; Pasqua, Franco; Passalacqua, Giovanni; Pela, Riccardo; Pesci, Alberto; Sebastiani, Alfredo; Vatrella, Alessandro

    2014-01-01

    The most recent guidelines define COPD in a multidimensional way, nevertheless the diagnosis is still linked to the limitation of airflow, usually measured by the reduction in the FEV1/FVC ratio below 70%. However, the severity of obstruction is not directly correlated to symptoms or to invalidity determined by COPD. Thus, besides respiratory function, COPD should be evaluated based on symptoms, frequency and severity of exacerbations, patient's functional status and health related quality of life (HRQoL). Therapy is mainly aimed at increasing exercise tolerance and reducing dyspnea, with improvement of daily activities and HRQoL. This can be accomplished by a drug-induced reduction of pulmonary hyperinflation and exacerbations frequency and severity. All guidelines recommend bronchodilators as baseline therapy for all stages of COPD, and long-acting inhaled bronchodilators, both beta-2 agonist (LABA) and antimuscarinic (LAMA) drugs, are the most effective in regular treatment in the clinically stable phase. The effectiveness of bronchodilators should be evaluated in terms of functional (relief of bronchial obstruction and pulmonary hyperinflation), symptomatic (exercise tolerance and HRQoL), and clinical improvement (reduction in number or severity of exacerbations), while the absence of a spirometric response is not a reason for interrupting treatment, if there is subjective improvement in symptoms. Because LABA and LAMA act via different mechanisms of action, when administered in combination they can exert additional effects, thus optimizing (i.e. maximizing) sustained bronchodilation in COPD patients with severe airflow limitation, who cannot benefit (or can get only partial benefit) by therapy with a single bronchodilator. Recently, a fixed combination of ultra LABA/LAMA (indacaterol/glycopyrronium) has shown that it is possible to get a stable and persistent bronchodilation, which can help in avoiding undesirable fluctuations of bronchial calibre.

  12. Power Converters Maximize Outputs Of Solar Cell Strings

    NASA Technical Reports Server (NTRS)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters were devised to maximize the power transferred from solar photovoltaic strings to storage batteries and other electrical loads. The converters help utilize large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; the power output of the converters is maximized; and the controller in the system is a microprocessor.
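
    A common way such converters maximize power transfer is a perturb-and-observe maximum-power-point tracking loop. A toy sketch with an assumed photovoltaic curve (the record does not specify the controller's actual algorithm):

```python
def pv_power(v):
    # Toy photovoltaic string model: current falls off sharply as voltage
    # rises, so power v * i(v) peaks at an intermediate operating voltage.
    i = max(0.0, 5.0 * (1.0 - (v / 40.0) ** 8))
    return v * i

def perturb_and_observe(v=10.0, dv=0.5, steps=200):
    # Classic MPPT loop: nudge the operating voltage, keep the nudge
    # direction while power increases, reverse it when power drops.
    p_prev = pv_power(v)
    for _ in range(steps):
        v += dv
        p = pv_power(v)
        if p < p_prev:
            dv = -dv
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 1), round(p_mpp, 1))  # oscillates around the maximum power point
```

    The loop converges to a small oscillation around the maximum power point, which is why real controllers use a small perturbation step once near the peak.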

  13. Creating an Agent Based Framework to Maximize Information Utility

    DTIC Science & Technology

    2008-03-01

    Pecarina graduated from Central High School in San Angelo, Texas. He received his Bachelor of Science in Computer Science from Angelo State...instance, snipers in several buildings in strategic topographic locations could be setting themselves up for a coordinated jamming attack while enemy...scheduling problem and maps it to the restless bandit problem. The restless bandit problem is formulated, then its complexity is described and an

  14. Utilizing Partnerships to Maximize Resources in College Counseling Services

    ERIC Educational Resources Information Center

    Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas

    2015-01-01

    Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…

  15. Expecting the Best

    ERIC Educational Resources Information Center

    DiPaula, John

    2010-01-01

    Educational expectations are psychological constructs that change over time and can be altered or influenced by various factors. The concept of educational expectations refers to how much schooling students realistically believe that they will complete. These expectations are eventually raised or lowered as students see others like themselves…

  16. Maximizing TDRS Command Load Lifetime

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2002-01-01

    was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion, and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
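
    The least-squares fit EMMIE performs, the slope of inferred mean anomaly versus time, has a simple closed form. A sketch with hypothetical nodal data (not actual TDRS ephemeris values):

```python
def fit_mean_motion(times, mean_anomaly):
    # Ordinary least squares for M(t) ~ M0 + n*t; the slope n is the
    # mean-motion estimate (analogous to EMMIE's fit to nodal data).
    k = len(times)
    t_bar = sum(times) / k
    m_bar = sum(mean_anomaly) / k
    num = sum((t - t_bar) * (m - m_bar) for t, m in zip(times, mean_anomaly))
    den = sum((t - t_bar) ** 2 for t in times)
    n = num / den
    return n, m_bar - n * t_bar  # mean motion (slope), intercept

# Hypothetical nodal data: time (days) and unwrapped inferred mean anomaly (deg)
times = [0.0, 1.0, 2.0, 3.0, 4.0]
anomaly = [0.0, 360.9, 722.1, 1082.8, 1444.1]
n, m0 = fit_mean_motion(times, anomaly)
print(round(n, 2))  # 361.01 deg/day
```

    In EMMIE this initial slope estimate seeds the interactive search; the user then perturbs it to flatten the downtrack-error trend and extend command load lifetime.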

  17. Impacts of demand dynamics and consumer expectations on world oil prices

    SciTech Connect

    Fromholzer, D.R.

    1981-01-01

    This research contributes to the study of world oil prices. Models of rational producers and consumers are examined. Producers set prices or production quantities to maximize the value of their oil resources. Consumers purchase oil and other commodities to maximize utility. A market solution is a time path of prices and quantities that balances the choices of producers and consumers. A dynamic model of consumer demand was developed to address alternative pricing implications of consumer technology and objectives. Simplified demand models based on this dynamic model of consumer behavior are combined with simplified models of producer behavior. Sensitivity of pricing results to alternative assumptions about consumer price expectations and to the use of different functional forms for these models is tested. Two alternative models represent demand, using recent oil market data.

  18. Can monkeys make investments based on maximized pay-off?

    PubMed

    Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard

    2011-03-10

    Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by adapting their decision rules to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.
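
    The pay-off structure of the task makes the optimal strategies easy to compute. A sketch with assumed constants (endowment of 4 food items, fixed return of 1 item; the paper's actual amounts may differ):

```python
def payoff(invested, kept, partner):
    # Kept food is consumed directly; invested food goes to the partner.
    if partner == "doubling":
        return kept + 2 * invested  # partner returns twice the amount given
    return kept + 1                 # fixed partner returns a constant amount

endowment = 4  # assumed number of food items available per trial
for partner in ("doubling", "fixed"):
    best = max(range(endowment + 1),
               key=lambda inv: payoff(inv, endowment - inv, partner))
    print(partner, "-> invest", best, "for payoff",
          payoff(best, endowment - best, partner))
# doubling -> invest 4 for payoff 8
# fixed -> invest 0 for payoff 5
```

    The optimal rule flips between partners (invest everything with the doubler, nothing with the fixed partner), which is exactly the partner-contingent strategy that only one of the 21 monkeys discovered.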

  19. Humans expect generosity

    NASA Astrophysics Data System (ADS)

    Brañas-Garza, Pablo; Rodríguez-Lara, Ismael; Sánchez, Angel

    2017-02-01

    Mechanisms supporting human ultra-cooperativeness are very much subject to debate. One psychological feature likely to be relevant is the formation of expectations, particularly about receiving cooperative or generous behavior from others. Without such expectations, social life will be seriously impeded and, in turn, expectations leading to satisfactory interactions can become norms and institutionalize cooperation. In this paper, we assess people’s expectations of generosity in a series of controlled experiments using the dictator game. Despite differences in respective roles, involvement in the game, degree of social distance or variation of stakes, the results are conclusive: subjects seldom predict that dictators will behave selfishly (by choosing the Nash equilibrium action, namely giving nothing). The majority of subjects expect that dictators will choose the equal split. This implies that generous behavior is not only observed in the lab, but also expected by subjects. In addition, expectations are accurate, matching closely the donations observed and showing that as a society we have a good grasp of how we interact. Finally, correlation between expectations and actual behavior suggests that expectations can be an important ingredient of generous or cooperative behavior.

  20. Humans expect generosity.

    PubMed

    Brañas-Garza, Pablo; Rodríguez-Lara, Ismael; Sánchez, Angel

    2017-02-14

    Mechanisms supporting human ultra-cooperativeness are very much subject to debate. One psychological feature likely to be relevant is the formation of expectations, particularly about receiving cooperative or generous behavior from others. Without such expectations, social life will be seriously impeded and, in turn, expectations leading to satisfactory interactions can become norms and institutionalize cooperation. In this paper, we assess people's expectations of generosity in a series of controlled experiments using the dictator game. Despite differences in respective roles, involvement in the game, degree of social distance or variation of stakes, the results are conclusive: subjects seldom predict that dictators will behave selfishly (by choosing the Nash equilibrium action, namely giving nothing). The majority of subjects expect that dictators will choose the equal split. This implies that generous behavior is not only observed in the lab, but also expected by subjects. In addition, expectations are accurate, matching closely the donations observed and showing that as a society we have a good grasp of how we interact. Finally, correlation between expectations and actual behavior suggests that expectations can be an important ingredient of generous or cooperative behavior.

  1. Humans expect generosity

    PubMed Central

    Brañas-Garza, Pablo; Rodríguez-Lara, Ismael; Sánchez, Angel

    2017-01-01

    Mechanisms supporting human ultra-cooperativeness are very much subject to debate. One psychological feature likely to be relevant is the formation of expectations, particularly about receiving cooperative or generous behavior from others. Without such expectations, social life will be seriously impeded and, in turn, expectations leading to satisfactory interactions can become norms and institutionalize cooperation. In this paper, we assess people’s expectations of generosity in a series of controlled experiments using the dictator game. Despite differences in respective roles, involvement in the game, degree of social distance or variation of stakes, the results are conclusive: subjects seldom predict that dictators will behave selfishly (by choosing the Nash equilibrium action, namely giving nothing). The majority of subjects expect that dictators will choose the equal split. This implies that generous behavior is not only observed in the lab, but also expected by subjects. In addition, expectations are accurate, matching closely the donations observed and showing that as a society we have a good grasp of how we interact. Finally, correlation between expectations and actual behavior suggests that expectations can be an important ingredient of generous or cooperative behavior. PMID:28195218

  2. Outside the Expected.

    ERIC Educational Resources Information Center

    Dienstfrey, Harris

    1968-01-01

    In examining the findings of "Pygmalion in the Classroom," an experimental study of the positive effects of favorable teacher expectations on the intellectual development of disadvantaged elementary school students, this review speculates about why the experimental students, whom the teachers expected to improve, and the control…

  3. Reflections on Expectations

    ERIC Educational Resources Information Center

    Santini, Joseph

    2014-01-01

    This article describes a teacher's reflections on the matter of student expectations. Santini begins with a common understanding of the "Pygmalion effect" from research projects conducted in earlier years that intimated "people's expectations could influence other people in the world around them." In the world of deaf…

  4. A Superintendent's High Expectations

    ERIC Educational Resources Information Center

    Pascopella, Angela

    2009-01-01

    This article profiles Wanda Bamberg, superintendent of the Aldine (Texas) Independent School District. Bamberg is used to high expectations regardless of the circumstances. She is a firecracker of sorts who talks much and expects much from her staff members, teachers, and students, who are mostly at-risk, Black and Hispanic, and economically…

  5. An Unexpected Expected Value.

    ERIC Educational Resources Information Center

    Schwartzman, Steven

    1993-01-01

    Discusses the surprising result that the expected number of marbles of one color drawn from a set of marbles of two colors after two draws without replacement is the same as the expected number of that color marble after two draws with replacement. Presents mathematical models to help explain this phenomenon. (MDH)
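
    The result is easy to verify by direct enumeration (a sketch, not from the article): for r marbles of one color and b of the other, both sampling schemes give the same expected count of that color after two draws, 2r/(r+b).

```python
from fractions import Fraction

def expected_red(r, b, replacement):
    """Expected number of red marbles seen in two draws from r red + b blue."""
    total = Fraction(0)
    for first_red in (True, False):
        p1 = Fraction(r if first_red else b, r + b)
        if replacement:
            r2, b2 = r, b                     # marble goes back in the urn
        else:
            r2 = r - (1 if first_red else 0)  # marble stays out
            b2 = b - (0 if first_red else 1)
        for second_red in (True, False):
            p2 = Fraction(r2 if second_red else b2, r2 + b2)
            total += p1 * p2 * (int(first_red) + int(second_red))
    return total

# Both sampling schemes give the same expectation, 2r/(r+b):
print(expected_red(3, 5, replacement=True))   # 3/4
print(expected_red(3, 5, replacement=False))  # 3/4
```

    The agreement follows from linearity of expectation: each draw is marginally red with probability r/(r+b) whether or not the first marble is replaced.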

  6. Using Debate to Maximize Learning Potential: A Case Study

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda

    2007-01-01

    Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…

  7. Learning to maximize reward rate: a model based on semi-Markov decision processes

    PubMed Central

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to a better outcome, but less time remains for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the values of the decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values. PMID:24904252

  8. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    PubMed

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to a better outcome, but less time remains for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of the decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values.
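
    The threshold-per-condition claim can be illustrated with a toy calculation (not the authors' SMDP learning algorithm): using the standard drift-diffusion expressions for accuracy and mean decision time, a grid search over per-condition thresholds maximizes the average reward rate. All parameter values below are hypothetical.

```python
import numpy as np

def accuracy(a, v):
    """Drift-diffusion accuracy for threshold a and drift v (noise s = 1)."""
    return 1.0 / (1.0 + np.exp(-2.0 * a * v))

def mean_dt(a, v):
    """Mean decision time of the symmetric drift-diffusion model."""
    return (a / v) * np.tanh(a * v)

def reward_rate(thresholds, conditions, iti=1.0):
    """Average reward per unit time over a mix of trial conditions."""
    reward = time = 0.0
    for a, (v, prob) in zip(thresholds, conditions):
        p = accuracy(a, v)
        reward += prob * p            # one unit of reward per correct trial
        time += prob * (mean_dt(a, v) + iti)
    return reward / time

# Two conditions: easy (high drift) and hard (low drift), equally likely.
conditions = [(2.0, 0.5), (0.5, 0.5)]
grid = np.linspace(0.05, 3.0, 60)
best = max(((a1, a2) for a1 in grid for a2 in grid),
           key=lambda t: reward_rate(t, conditions))
print(best)  # the reward-rate-optimal thresholds differ across conditions
```

    In this toy setting the optimal threshold for the hard condition is lower than for the easy one: when evidence accumulates slowly, spending long on a trial buys little accuracy and wastes time that could earn reward on other trials.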

  9. Reward expectations in honeybees.

    PubMed

    Gil, Mariana

    2010-03-01

    The study of expectations of reward helps to understand the rules controlling goal-directed behavior, as well as decision making and planning. I shall review a series of recent studies focusing on how the food-gathering behavior of honeybees depends upon reward expectations. These studies document that free-flying honeybees develop long-term expectations of reward and use them to regulate their investment of energy/time during foraging. They also present a laboratory procedure suitable for analysis of the neural substrates of reward expectations in the honeybee brain. I discuss these findings in the context of individual and collective foraging, on the one hand, and of the neurobiology of reward learning and memory, on the other.

  10. Health expectancy indicators.

    PubMed Central

    Robine, J. M.; Romieu, I.; Cambois, E.

    1999-01-01

    An outline is presented of progress in the development of health expectancy indicators, which are growing in importance as a means of assessing the health status of populations and determining public health priorities. PMID:10083720

  11. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on the real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with a longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find a feasible solution. The simulation results present the trade-off among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding a leg number constraint, which accounts for the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and a near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is then formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
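
    Algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian. A brute-force sketch of single-edge addition (far simpler than the MGP/WTS/SDP methods described above, and feasible only for small graphs) illustrates the quantity being maximized.

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)[1]  # eigenvalues in ascending order

def best_edge_to_add(adj):
    """Non-edge whose addition maximizes algebraic connectivity (brute force)."""
    n = adj.shape[0]
    best, best_val = None, -np.inf
    for i, j in combinations(range(n), 2):
        if adj[i, j] == 0:
            trial = adj.copy()
            trial[i, j] = trial[j, i] = 1
            val = algebraic_connectivity(trial)
            if val > best_val:
                best, best_val = (i, j), val
    return best, best_val

# A 5-node path graph: a minimally robust connected topology.
n = 5
path = np.zeros((n, n))
for k in range(n - 1):
    path[k, k + 1] = path[k + 1, k] = 1

print(algebraic_connectivity(path))   # ~0.382 for the 5-node path
print(best_edge_to_add(path))         # adding (0, 4) closes the cycle
```

    Closing the path into a cycle wins here because any other chord leaves a degree-1 vertex, and algebraic connectivity is bounded above by the vertex connectivity of a non-complete graph.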

  12. Multivariate residues and maximal unitarity

    NASA Astrophysics Data System (ADS)

    Søgaard, Mads; Zhang, Yang

    2013-12-01

    We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.

  13. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  14. Behavior, Expectations and Status

    ERIC Educational Resources Information Center

    Webster, Jr, Murray; Rashotte, Lisa Slattery

    2010-01-01

    We predict effects of behavior patterns and status on performance expectations and group inequality using an integrated theory developed by Fisek, Berger and Norman (1991). We next test those predictions using new experimental techniques we developed to control behavior patterns as independent variables. In a 10-condition experiment, predictions…

  15. Addicted to Expectations? Conference!

    ERIC Educational Resources Information Center

    Gilliland, Mary

    Whether teaching incoming students or training faculty in other disciplines, writing instructors often form unrealistic expectations about goals and skills of students and colleagues, which (like chemical addictions) predictably recur each semester as though they had never occurred before. For effective instruction, it is important that…

  16. Great Expectations. [Lesson Plan].

    ERIC Educational Resources Information Center

    Devine, Kelley

    Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy, and that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…

  17. Maintaining High Expectations

    ERIC Educational Resources Information Center

    Williams, Roger; Williams, Sherry

    2014-01-01

    Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…

  18. Parenting with High Expectations

    ERIC Educational Resources Information Center

    Timperlake, Benna Hull; Sanders, Genelle Timperlake

    2014-01-01

    In some ways raising deaf or hard of hearing children is no different than raising hearing children; expectations must be established and periodically tweaked. Benna Hull Timperlake, who with husband Roger, raised two hearing children in addition to their deaf daughter, Genelle Timperlake Sanders, and Genelle, now a deaf professional, share their…

  19. Likelihood maximization for list-mode emission tomographic image reconstruction.

    PubMed

    Byrne, C

    2001-10-01

    The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
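
    The EMML (MLEM) iteration mentioned above has a compact multiplicative form. The following is a generic sketch on a small synthetic system (not the block-iterative variants or the gamma-prior extension of the paper); the system matrix and data below are hypothetical.

```python
import numpy as np

def emml(A, y, n_iter=300):
    """EMML/MLEM: multiplicative updates that raise the Poisson likelihood.

    A : (m, n) nonnegative system matrix (detection probabilities)
    y : (m,) observed counts
    """
    x = np.ones(A.shape[1])            # strictly positive starting image
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x)            # measured / predicted counts
        x *= (A.T @ ratio) / sens      # EMML update preserves nonnegativity
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 8))
x_true = rng.random(8) + 0.5
y = A @ x_true                         # noise-free data for the sketch
x_hat = emml(A, y)
print(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))  # relative misfit
```

    Each update multiplies the current image by the back-projected ratio of measured to predicted counts, so the data misfit shrinks monotonically in Kullback-Leibler divergence as iterations proceed.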

  20. Ventromedial frontal lobe damage disrupts value maximization in humans.

    PubMed

    Camille, Nathalie; Griffiths, Cathryn A; Vo, Khoi; Fellows, Lesley K; Kable, Joseph W

    2011-05-18

    Recent work in neuroeconomics has shown that regions in orbitofrontal and medial prefrontal cortex encode the subjective value of different options during choice. However, these electrophysiological and neuroimaging studies cannot demonstrate whether such signals are necessary for value-maximizing choices. Here we used a paradigm developed in experimental economics to empirically measure and quantify violations of utility theory in humans with damage to the ventromedial frontal lobe (VMF). We show that people with such damage are more likely to make choices that violate the generalized axiom of revealed preference, which is the one necessary and sufficient condition for choices to be consistent with value maximization. These results demonstrate that the VMF plays a critical role in value-maximizing choice.

  1. Natural selection maximizes Fisher information.

    PubMed

    Frank, S A

    2009-02-01

    In biology, information flows from the environment to the genome by the process of natural selection. However, it has not been clear precisely what sort of information metric properly describes natural selection. Here, I show that Fisher information arises as the intrinsic metric of natural selection and evolutionary dynamics. Maximizing the amount of Fisher information about the environment captured by the population leads to Fisher's fundamental theorem of natural selection, the most profound statement about how natural selection influences evolutionary dynamics. I also show a relation between Fisher information and Shannon information (entropy) that may help to unify the correspondence between information and dynamics. Finally, I discuss possible connections between the fundamental role of Fisher information in statistics, biology and other fields of science.

  2. Genetic enhancements and expectations.

    PubMed

    Sorensen, K

    2009-07-01

    Some argue that genetic enhancements and environmental enhancements are not importantly different: environmental enhancements such as private schools and chess lessons are simply the old-school way to have a designer baby. I argue that there is an important distinction between the two practices--a distinction that makes state restrictions on genetic enhancements more justifiable than state restrictions on environmental enhancements. The difference is that parents have no settled expectations about genetic enhancements.

  3. Expectation Maximization and its Application in Modeling, Segmentation and Anomaly Detection

    DTIC Science & Technology

    2008-05-01

    The EM update equations for single- and multi-band Gaussian and for single-band Gamma and Beta mixture models are derived for electro-optical infrared (IR) data. The report discusses the convergence properties of EM and the selection of the number of classes, and notes the algorithm's vulnerability to the initial values: if these are "far" from the actual values, there may be cases when the algorithm does not converge.
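
    As a minimal illustration of the EM updates for a mixture model (a generic one-dimensional Gaussian sketch, not the report's multi-band or Gamma/Beta equations; data and initialisation below are hypothetical):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component one-dimensional Gaussian mixture."""
    # Crude initialisation from the sample quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i).
        d = (x[:, None] - mu) / sigma
        log_p = -0.5 * d**2 - np.log(sigma) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(2.0, 0.5, 600)])
pi, mu, sigma = em_gmm_1d(x)
print(np.sort(mu))  # close to the true component means (-2, 2)
```

    With well-separated components the updates converge from this crude start; as the report notes, initial values "far" from the truth are exactly the situation in which such convergence can fail.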

  4. Dry period length to maximize production across adjacent lactations and lifetime production.

    PubMed

    Kuhn, M T; Hutchison, J L; Norman, H D

    2006-05-01

    The primary objectives of this research were to determine the dry period lengths that maximize production across adjacent lactations and also dry period length that maximizes lifetime yield. Effect of days dry (DD) after lactations 1 through 3 were determined separately for both adjacent lactation sums and lifetime yield. Field data, collected through the Dairy Herd Improvement Association, on US Holstein cows first calving between January 1997 and January 2004 were utilized. Lifetime records were restricted to cows first calving no later than December 1999. Actual lactation yields, in contrast to standardized records, were used to calculate lactation sums and lifetime records. Herds were required to be on test for the entire period to avoid partial records. Another important edit was that actual calving dates had to agree with expected calving dates, based on reported days open, within 10 d. This edit ensured that the producer knew, at least at one point in time, when the cow was going to calve. Cow effects were corrected for in both the adjacent lactation and lifetime analyses. The minimum DD to maximize production across adjacent lactations depended on parity. For yield across first and second lactations, there was little loss in production with a minimum of 40 to 45 DD. Longer dry periods (55 to 65 DD) were required after second and third lactations, however, presumably due to the lower persistency of second and later lactation cows. Lifetime production was maximized by 40 to 50 DD after first lactation and 30 to 40 DD after second and later lactations. Fewer DD were required to maximize lifetime yield than adjacent lactation yield because cows with fewer DD also had more lifetime days in milk. Although dry periods of 30 to 40 d can be used after second and later lactations without cost in lifetime yield, their benefit to lifetime production is minimal. Dry periods shorter than 30 d or longer than 70 d are costly to lifetime yield and should be avoided. Dry

  5. The futility of utility: how market dynamics marginalize Adam Smith

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2000-10-01

    Economic theorizing is based on the postulated, nonempirical notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility. The results are supposed to represent mathematically the stabilizing action of Adam Smith's invisible hand. For deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically, due to nonintegrable dynamics, when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. For the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data or in real market data.

  6. Maximal dinucleotide and trinucleotide circular codes.

    PubMed

    Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe

    2016-01-21

    We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly less than 6 elements (maximum size of dinucleotide circular codes). On the other hand, a computer calculus shows that there are maximal trinucleotide circular codes with less than 20 elements (maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements and no maximal trinucleotide circular code having less than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined.

  7. A maximally supersymmetric Kondo model

    NASA Astrophysics Data System (ADS)

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo

    2012-10-01

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N=4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N=4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  8. Maximizing the optical network capacity

    PubMed Central

    Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.

    2016-01-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
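
    The intensity-dependent nonlinear limit can be illustrated with a widely used Gaussian-noise-style model, in which nonlinear interference grows cubically with launch power, so channel capacity peaks at a finite power rather than growing without bound. The coefficient values below are hypothetical.

```python
import numpy as np

B = 32e9          # channel bandwidth, Hz (hypothetical WDM channel)
N_ase = 1e-4      # accumulated amplifier (ASE) noise power, arbitrary units
eta = 1e-3        # nonlinear interference coefficient (GN-model style)

P = np.logspace(-4, 1, 500)          # launch power sweep, same units as N_ase
snr = P / (N_ase + eta * P**3)       # Kerr nonlinearity caps the effective SNR
C = B * np.log2(1 + snr)             # Shannon capacity per channel, bit/s

P_opt = P[np.argmax(C)]
print(P_opt)  # capacity peaks at a finite optimal launch power
```

    Setting the derivative of the effective SNR to zero gives the optimum in closed form, P_opt = (N_ase / (2 eta))^(1/3): below it the channel is noise-limited, above it nonlinear interference dominates.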

  9. A Maximally Supersymmetric Kondo Model

    SciTech Connect

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC

    2012-02-17

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  10. Maximal switchability of centralized networks

    NASA Astrophysics Data System (ADS)

    Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu

    2016-08-01

    We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferentially connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
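
    A minimal toy version of such a hub-and-satellite ("centralized") network can be sketched as follows; the architecture and all parameter values are my assumptions for illustration, not the authors' model. One slow hub drives N_s satellites through Hopfield-style tanh couplings, and the hub response time tau_hub is the control knob the abstract describes.

```python
import math
import random

def simulate(n_sat=20, tau_hub=5.0, steps=2000, dt=0.01, seed=1):
    """Toy n/N_s-centric network (assumed form): one hub coupled to n_sat
    satellites, continuous-time Hopfield dynamics integrated with explicit
    Euler steps. tau_hub sets how slowly the hub responds."""
    rng = random.Random(seed)
    w_out = [rng.uniform(-1, 1) for _ in range(n_sat)]  # hub -> satellites
    w_in = [rng.uniform(-1, 1) for _ in range(n_sat)]   # satellites -> hub
    hub, sats = 0.1, [0.0] * n_sat
    for _ in range(steps):
        drive = sum(w * math.tanh(s) for w, s in zip(w_in, sats))
        hub += dt * (-hub + drive) / tau_hub            # slow controller hub
        sats = [s + dt * (-s + w * math.tanh(hub)) for s, w in zip(sats, w_out)]
    return hub, sats
```

Because the satellite states relax toward w·tanh(hub), the slow hub effectively dictates the long-time dynamics, which is the regime the abstract analyzes.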

  11. Utility franchises reconsidered

    SciTech Connect

    Weidner, B.

    1981-11-01

    It is easier to obtain a public utility franchise than one for a fast-food store because companies like Burger King value the profit share and control available with a franchise arrangement. The investor-owned utilities (IOUs) in Chicago and elsewhere get little financial or regulatory benefit, although they do have an alternative because the franchise can be taken over by the city with one year's notice. As IOUs evolved, the annual franchise fee has been incorporated into the rate in a move that taxes ratepayers and maximizes profits. Cities that found franchising unsatisfactory are looking for ways to terminate the franchise and finance a takeover, but limited-term and indeterminate franchises may offer a better mechanism when public needs and utility aims diverge. A directory lists franchised utilities by state and comments on their legal status. (DCK)

  12. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  13. Network channel allocation and revenue maximization

    NASA Astrophysics Data System (ADS)

    Hamalainen, Timo; Joutsensalo, Jyrki

    2002-09-01

    This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for different kinds of networks, such as 4G networks (fast wireless access to a wired network), to support connections of given duration that require a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered as a bottleneck, and the goal is to find the customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking; instead, the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class) and the delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
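
    A hypothetical arrival-rate function satisfying the four monotonicity conditions (i)-(iv) might look like the following sketch; the functional form, parameter names and constant k are my assumptions, not the paper's explicit formula.

```python
def arrival_rate(offered_rate, price, load, delay, k=10.0):
    """Hypothetical arrival-rate function: increasing in the offered data
    rate and decreasing in price, network load, and delay, matching
    conditions (i)-(iv). The form and constant k are illustrative only."""
    return k * offered_rate / ((1.0 + price) * (1.0 + load) * (1.0 + delay))

def revenue(offered_rate, price, load, delay):
    # Revenue per unit time: arrival rate times the price paid per arrival.
    return arrival_rate(offered_rate, price, load, delay) * price
```

A link operator would then search over prices and offered rates, subject to the capacity constraint, for the profile maximizing this revenue.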

  14. Beyond my wildest expectations.

    PubMed

    Nester, Eugene

    2014-01-01

    With support from my parents, I fulfilled their and my expectations of graduating from college and becoming a scientist. My scientific career has focused on two organisms, Bacillus subtilis and Agrobacterium tumefaciens, and two experimental systems, aromatic amino acid synthesis and DNA transfer in bacteria and plants. Studies on B. subtilis emphasized the genetics and biochemistry of aromatic amino acid synthesis and the characterization of competence in DNA transformation. I carried out both as a postdoc at Stanford with Josh Lederberg. At the University of Washington, I continued these studies and then investigated how Agrobacterium transforms plant cells. In collaboration, Milt Gordon, Mary-Dell Chilton, and I found that this bacterium could transfer a piece of its plasmid into plant cells and thereby modify their properties. This discovery opened up a host of intriguing questions that we have tried to answer over the last 35 years.

  15. A comparative study of expectant parents' childbirth expectations.

    PubMed

    Kao, Bi-Chin; Gau, Meei-Ling; Wu, Shian-Feng; Kuo, Bih-Jaw; Lee, Tsorng-Yeh

    2004-09-01

    The purpose of this study was to understand childbirth expectations and differences in childbirth expectations among expectant parents. For convenience sampling, 200 couples willing to participate in this study were chosen from two hospitals in central Taiwan. Inclusion criteria were at least 36 weeks of gestation, aged 18 and above, no prenatal complications, and willing to consent to participate in this study. Instruments used to collect data included basic demographic data and the Childbirth Expectations Questionnaire. Findings of the study revealed that (1) five factors were identified by expectant parents regarding childbirth expectations including the caregiving environment, expectation of labor pain, spousal support, control and participation, and medical and nursing support; (2) no general differences were identified in the childbirth expectations between expectant fathers and expectant mothers; and (3) expectant fathers with a higher socioeconomic status and who had received prenatal (childbirth) education had higher childbirth expectations, whereas mothers displayed no differences in demographic characteristics. The study results may help clinical healthcare providers better understand differences in expectations during labor and birth and childbirth expectations by expectant parents in order to improve the medical and nursing system and promote positive childbirth experiences and satisfaction for expectant parents.

  16. The Learning Styles, Expectations, and Needs of Online Students

    ERIC Educational Resources Information Center

    Mupinga, Davison M.; Nora, Robert T.; Yaw, Dorothy Carole

    2006-01-01

    Each student comes to class with certain learning experiences, expectations, and needs that have to be addressed, and to which instructors need to be sensitive, to maximize the students' learning experiences. However, because of the unknown make-up of online classes, the characteristics of online students may be unclear, making it difficult to…

  17. Sociology of Low Expectations

    PubMed Central

    Samuel, Gabrielle; Williams, Clare

    2015-01-01

    Social scientists have drawn attention to the role of hype and optimistic visions of the future in providing momentum to biomedical innovation projects by encouraging innovation alliances. In this article, we show how less optimistic, uncertain, and modest visions of the future can also provide innovation projects with momentum. Scholars have highlighted the need for clinicians to carefully manage the expectations of their prospective patients. Using the example of a pioneering clinical team providing deep brain stimulation to children and young people with movement disorders, we show how clinicians confront this requirement by drawing on their professional knowledge and clinical expertise to construct visions of the future with their prospective patients; visions which are personalized, modest, and tainted with uncertainty. We refer to this vision-constructing work as recalibration, and we argue that recalibration enables clinicians to manage the tension between the highly optimistic and hyped visions of the future that surround novel biomedical interventions, and the exigencies of delivering those interventions in a clinical setting. Drawing on work from science and technology studies, we suggest that recalibration enrolls patients in an innovation alliance by creating a shared understanding of how the “effectiveness” of an innovation shall be judged. PMID:26527846

  18. Expectations and speech intelligibility.

    PubMed

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  19. Glioblastoma: changing expectations?

    PubMed

    Arribas Alpuente, Leoncio; Menéndez López, Antonio; Yayá Tur, Ricardo

    2011-04-01

    Glioblastoma (GB) represents the most aggressive glioma in the adult population. Despite recent research efforts, the prognosis of patients with GB has remained dismal. Lately, knowledge of the genetics of gliomagenesis has increased; we even have a classification of the genetic expression of the tumour. The main problem is that at the moment we do not have the therapeutic resources to treat these tumours as effectively as we can other tumours, such as breast, lung and colorectal cancer. We have also improved diagnostic imaging, especially with the new MRI sequences; we can now better define the characteristics of the tumour area and the surrounding brain structures, allowing us to adjust resections. Thanks to the most advanced surgical techniques, such as neuronavigation and intraoperative control of nervous function and tumour volume, the neurosurgeon is able to complete tumour exeresis with less morbidity. These imaging techniques allow the radiation oncologist to better contour the irradiation target volume, the structures and the organs at risk, and to diminish the irradiation of apparently healthy tissue. Nowadays, knowledge of brain stem cells provides new expectations for future treatments. Novel targeted agents such as bevacizumab, imatinib, erlotinib, temsirolimus, immunotherapy, cilengitide, talampanel, etc. are helping classical chemotherapeutic agents, like temozolomide, to achieve an increase in overall survival. The main objective is to improve median overall survival, which is currently between 9 and 12 months, with a good quality of life, measured by the ability to carry out daily life activities.

  20. Inflation in maximal gauged supergravities

    SciTech Connect

    Kodama, Hideo; Nozawa, Masato

    2015-05-18

    We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^-3 and is close to the value in the Starobinsky model.

  1. Development and psychometric evaluation of the Milwaukee Psychotherapy Expectations Questionnaire.

    PubMed

    Norberg, Melissa M; Wetterneck, Chad T; Sass, Daniel A; Kanter, Jonathan W

    2011-06-01

    The Milwaukee Psychotherapy Expectations Questionnaire (MPEQ) was developed to measure clients' expectations about the components and effects of therapy. Items were generated rationally based upon the theoretical literature and existing expectancy measures. An exploratory factor analysis revealed a 2-factor solution, comprised of Process Expectations and Outcome Expectations, which was supported by confirmatory factor analyses in three additional samples. The measure demonstrated good internal consistency and test-retest reliability, along with support for convergent, discriminant, and predictive validity. These results present initial evidence for the utility of the MPEQ in assessing both process and outcome expectations in therapy.

  2. Asset Management for Water and Wastewater Utilities

    EPA Pesticide Factsheets

    Renewing and replacing the nation's public water infrastructure is an ongoing task. Asset management can help a utility maximize the value of its capital as well as its operations and maintenance dollars.

  3. Expected relative fitness and the adaptive topography of fluctuating selection.

    PubMed

    Lande, Russell

    2007-08-01

    Wright's adaptive topography describes gene frequency evolution as a maximization of mean fitness in a constant environment. I extended this to a fluctuating environment by unifying theories of stochastic demography and fluctuating selection, assuming small or moderate fluctuations in demographic rates with a stationary distribution, and weak selection among the types. The demography of a large population, composed of haploid genotypes at a single locus or normally distributed phenotypes, can then be approximated as a diffusion process and transformed to produce the dynamics of population size, N, and gene frequency, p, or mean phenotype. The expected evolution of p or the mean phenotype is a product of genetic variability and the gradient, with respect to p or the mean phenotype, of the long-run growth rate of the population. This shows that the expected evolution maximizes the long-run growth rate, defined as the mean Malthusian fitness in the average environment minus half the environmental variance in population growth rate. Thus the long-run growth rate, as a function of p or the mean phenotype, represents an adaptive topography that, despite environmental fluctuations, does not change with time. The haploid model is dominated by environmental stochasticity, so the expected maximization is not realized. Different constraints on quantitative genetic variability, and stabilizing selection in the average environment, allow evolution of the mean phenotype to undergo a stochastic maximization of the long-run growth rate. Although the expected evolution maximizes the long-run growth rate of the population, for a genotype or phenotype the long-run growth rate is not a valid measure of fitness in a fluctuating environment. The haploid and quantitative character models both reveal that the expected relative fitness of a type is its Malthusian fitness in the average environment minus the environmental covariance between its growth rate and that of the population.
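
    The maximized quantity can be written compactly. A sketch in conventional notation follows; the symbols \bar{m} (mean Malthusian fitness in the average environment) and \sigma_e^2 (environmental variance in population growth rate) are my assumption, since the abstract's own symbols did not survive extraction.

```latex
% Long-run growth rate as an adaptive topography (notation assumed)
\tilde{r}(p) \;=\; \bar{m}(p) \;-\; \tfrac{1}{2}\,\sigma_e^2(p),
\qquad
E[\Delta p] \;\propto\; \frac{\partial \tilde{r}(p)}{\partial p}.
```

The gradient form makes the "topography" reading explicit: expected gene-frequency change climbs the fixed surface \tilde{r}(p) even though the environment fluctuates.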

  4. Computing Maximally Supersymmetric Scattering Amplitudes

    NASA Astrophysics Data System (ADS)

    Stankowicz, James Michael, Jr.

    This dissertation reviews work in computing N = 4 super-Yang-Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at

  5. Specificity of a Maximal Step Exercise Test

    ERIC Educational Resources Information Center

    Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.

    2007-01-01

    To adhere to the principle of "exercise specificity" exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO sub 2) when tested using an activity specific, maximal step…

  6. Diurnal Variations in Maximal Oxygen Uptake.

    ERIC Educational Resources Information Center

    McClellan, Powell D.

    A study attempted to determine if diurnal (daily cyclical) variations were present during maximal exercise. The subjects' (30 female undergraduate physical education majors) oxygen consumption and heart rates were monitored while they walked on a treadmill on which the grade was raised every minute. Each subject was tested for maximal oxygen…

  7. Statistical mechanics of maximal independent sets

    NASA Astrophysics Data System (ADS)

    Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl

    2009-12-01

    The graph theoretic concept of a maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding minimum- and maximum-density maximal independent sets are hard optimization problems. In this paper, we use the cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica symmetric and one-step replica symmetry breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
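
    The packing and covering constraints that define a maximal independent set can be made concrete with a small sketch: a greedy construction on a random graph. This is a generic textbook algorithm for illustration, not the cavity-method analysis of the paper.

```python
import random

def greedy_mis(n, edges, seed=0):
    """Greedy construction of a maximal (not maximum) independent set:
    scan nodes in random order, adding a node whenever none of its
    neighbours is already in the set. The result satisfies the packing
    constraint (no two adjacent members) and the covering constraint
    (every non-member has at least one member neighbour)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = list(range(n))
    random.Random(seed).shuffle(order)
    mis = set()
    for v in order:
        if not (adj[v] & mis):  # no neighbour occupied yet -> occupy v
            mis.add(v)
    return mis, adj
```

Running the greedy scan from different random orders samples different maximal independent sets, which is exactly the solution landscape whose entropy the paper computes.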

  8. Measuring Alcohol Expectancies in Youth

    ERIC Educational Resources Information Center

    Randolph, Karen A.; Gerend, Mary A.; Miller, Brenda A.

    2006-01-01

    Beliefs about the consequences of using alcohol, alcohol expectancies, are powerful predictors of underage drinking. The Alcohol Expectancies Questionnaire-Adolescent form (AEQ-A) has been widely used to measure expectancies in youth. Despite its broad use, the factor structure of the AEQ-A has not been firmly established. It is also not known…

  9. Labview utilities

    SciTech Connect

    Persaud, Arun

    2011-09-30

    The software package provides several utilities written in LabView. These utilities don't form independent programs but rather can be used as a library or as controls in other LabView programs. The utilities include several new controls (xcontrols), VIs for input and output routines, as well as other 'helper' functions not provided in the standard LabView environment.

  10. The evolution of utility functions and psychological altruism.

    PubMed

    Clavien, Christine; Chapuisat, Michel

    2016-04-01

    Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the evolutionary stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as one's own children, romantic partners and long-term reciprocators.

  11. Formation Control of the MAXIM L2 Libration Orbit Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda

    2004-01-01

    The Micro-Arcsecond Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imaging mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 microarcsecond imaging. To achieve mission requirements, MAXIM will have to improve on pointing by orders of magnitude. This pointing requirement impacts the control and design of the formation. Currently the architecture is comprised of 25 spacecraft, which will form the sparse apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This configuration will deploy 24 spacecraft as optics modules and one as the detector. The formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. In this paper, we provide analysis and trades of several control efforts that are dependent upon the pointing requirements and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions that minimize the control efforts and we address both continuous and discrete control via LQR and feedback linearization. Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.

  12. Formation Control of the MAXIM L2 Libration Orbit Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda

    2004-01-01

    The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.

  13. Maximizing Complementary Quantities by Projective Measurements

    NASA Astrophysics Data System (ADS)

    M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu

    2017-04-01

    In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, even though we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
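
    For pure two-qubit states, one of the complementary quantities mentioned, the concurrence, has a simple closed form: C = 2|ad - bc| for the state a|00> + b|01> + c|10> + d|11>. A minimal sketch (the example amplitudes are mine):

```python
import math

def concurrence(a, b, c, d):
    """Concurrence of the pure two-qubit state a|00> + b|01> + c|10> + d|11>,
    C = 2|ad - bc|. Ranges from 0 (product state) to 1 (maximally entangled,
    the initial condition considered in the abstract)."""
    return 2.0 * abs(a * d - b * c)

bell = concurrence(1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2))  # maximally entangled
product = concurrence(1.0, 0.0, 0.0, 0.0)                         # product state
```

Mixed states (as arise after q_B couples to R) require the more general Wootters formula over the density matrix; the pure-state form above is the special case.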

  14. Name Stereotypes and Teachers' Expectations

    ERIC Educational Resources Information Center

    Harari, Herbert; McDavid, John W.

    1973-01-01

    The studies described here were executed to explore and verify the conjecture that teachers' expectations are likely to be systematically associated with implicit stereotyped perceptions of names, and these stereotypical expectations may in turn be reflected in teachers' subjective evaluation of student products and performance. (Author/RK)

  15. Increasing Expectations for Student Effort.

    ERIC Educational Resources Information Center

    Schilling, Karen Maitland; Schilling, Karl L.

    1999-01-01

    States that few higher education institutions have publicly articulated clear expectations of the knowledge and skills students are to attain. Describes gap between student and faculty expectations for academic effort. Reports that what is required in students' first semester appears to play a strong role in shaping the time investments made in…

  16. High Hopes and High Expectations!

    ERIC Educational Resources Information Center

    Wilford, Sara

    2006-01-01

    The start of each new school year is an especially hopeful time, and this author has found that clearly communicating expectations for teachers and families can set the stage for a wonderful new school year. This article discusses the expectations of teachers, directors, and families as a new school year begins.

  17. Institutional Differences: Expectations and Perceptions.

    ERIC Educational Resources Information Center

    Silver, Harold

    1982-01-01

    The history of higher education has paid scant attention to the attitudes and expectations of its customers, students, and employers of graduates. Recent research on student and employer attitudes toward higher education sectors has not taken into account these expectations in the context of recent higher education history. (Author/MSE)

  18. Expectancy Climate and School Effectiveness.

    ERIC Educational Resources Information Center

    Miskel, Cecil; Bloom, Susan

    Two questionnaire surveys of 89 Kansas public elementary and secondary schools examined, first, the relationship between school expectancy climate--teachers' expectations that their efforts would lead to positive student results--and school effectiveness, and, second, the change in that relationship through the school year. School effectiveness…

  19. Sibling Status Effects: Adult Expectations.

    ERIC Educational Resources Information Center

    Baskett, Linda Musun

    1985-01-01

    This study attempted to determine what expectations or beliefs adults might hold about a child based on his or her sibling status alone. Ratings on 50 adjective pairs for each of three sibling status types, only, oldest, and youngest child, were assessed in relation to adult expectations, birth order, and parental status of rater. (Author/DST)

  20. Expected Energy Method for Electro-Optical SNR Calculations.

    DTIC Science & Technology

    1984-02-02

AD-A139 984: Expected Energy Method for Electro-Optical SNR Calculations. G. J. Mayer, Group 9, Lincoln Laboratory, Massachusetts Institute of Technology, Lexington. Technical Report 634, 2 February 1984. Approved for public release; distribution… …analysis of image and sensor element configuration. This method allows the optimal pixel size to be selected to maximize the expected SNR for any point…

  1. Lipidome determinants of maximal lifespan in mammals.

    PubMed

    Bozek, Katarzyna; Khrameeva, Ekaterina E; Reznick, Jane; Omerbašić, Damir; Bennett, Nigel C; Lewin, Gary R; Azpurua, Jorge; Gorbunova, Vera; Seluanov, Andrei; Regnard, Pierrick; Wanert, Fanelie; Marchal, Julia; Pifferi, Fabien; Aujard, Fabienne; Liu, Zhen; Shi, Peng; Pääbo, Svante; Schroeder, Florian; Willmitzer, Lothar; Giavalisco, Patrick; Khaitovich, Philipp

    2017-12-01

Maximal lifespan of mammalian species, even closely related ones, may differ more than 10-fold; however, the nature of the mechanisms that determine this variability is unresolved. Here, we assess the relationship between maximal lifespan duration and concentrations of more than 20,000 lipid compounds, measured in 669 tissue samples from 6 tissues of 35 species representing three mammalian clades: primates, rodents and bats. We identify lipids associated with species' longevity across the three clades, uncoupled from other parameters, such as basal metabolic rate, body size, or body temperature. These lipids cluster in specific lipid classes and pathways, and enzymes linked to them display signatures of greater stabilizing selection in long-living species, and cluster in functional groups related to signaling and protein-modification processes. These findings point towards the existence of defined molecular mechanisms underlying variation in maximal lifespan among mammals.

  2. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
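The greedy fixation rule described in the abstract can be sketched on a toy one-dimensional "stimulus". Everything here (the Bernoulli belief per location, the exponential acuity fall-off, and the `falloff` constant) is an illustrative assumption, not the authors' model:

```python
import math

def entropy(p):
    """Binary entropy (bits) of a Bernoulli belief p about one location."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(beliefs, fix, falloff=2.0):
    """Expected uncertainty removed by fixating at index `fix`; acuity
    (and hence entropy reduction) decays with eccentricity."""
    return sum(math.exp(-abs(loc - fix) / falloff) * entropy(p)
               for loc, p in enumerate(beliefs))

def next_fixation(beliefs):
    """Greedy rule from the abstract: fixate next wherever uncertainty
    about the stimulus is reduced the most."""
    return max(range(len(beliefs)), key=lambda f: info_gain(beliefs, f))

# Locations 1-3 are still maximally uncertain (p = 0.5); the rule picks
# the centre of that uncertain cluster.
beliefs = [0.9, 0.5, 0.5, 0.5, 0.9, 0.9, 0.9]
print(next_fixation(beliefs))
```

Iterating this rule (and updating `beliefs` after each fixation) yields a full fixation sequence under the same greedy principle.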

  3. Singularity structure of maximally supersymmetric scattering amplitudes.

    PubMed

    Arkani-Hamed, Nima; Bourjaily, Jacob L; Cachazo, Freddy; Trnka, Jaroslav

    2014-12-31

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity--properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).

  4. Understanding violations of Gricean maxims in preschoolers and adults

    PubMed Central

    Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji

    2015-01-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018

  5. How to Generate Good Profit Maximization Problems

    ERIC Educational Resources Information Center

    Davis, Lewis

    2014-01-01

    In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…

  6. Maximizing the Spectacle of Water Fountains

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…

  7. Maximizing the Effective Use of Formative Assessments

    ERIC Educational Resources Information Center

    Riddell, Nancy B.

    2016-01-01

    In the current age of accountability, teachers must be able to produce tangible evidence of students' concept mastery. This article focuses on implementation of formative assessments before, during, and after instruction in order to maximize teachers' ability to effectively monitor student achievement. Suggested strategies are included to help…

  8. Maximal dinucleotide comma-free codes.

    PubMed

    Fimmel, Elena; Strüngmann, Lutz

    2016-01-21

The problem of retrieval and maintenance of the correct reading frame plays a significant role in RNA transcription. Circular codes, and especially comma-free codes, can help to understand the underlying mechanisms of error-detection in this process. In recent years much attention has been paid to the investigation of trinucleotide circular codes (see, for instance, Fimmel et al., 2014; Fimmel and Strüngmann, 2015a; Michel and Pirillo, 2012; Michel et al., 2012, 2008), while dinucleotide codes had been touched on only marginally, even though dinucleotides are associated with important biological functions. Recently, all maximal dinucleotide circular codes were classified (Fimmel et al., 2015; Michel and Pirillo, 2013). The present paper studies maximal dinucleotide comma-free codes and their close connection to maximal dinucleotide circular codes. We give a construction principle for such codes and provide a graphical representation that allows them to be visualized geometrically. Moreover, we compare the results for dinucleotide codes with the corresponding situation for trinucleotide maximal self-complementary C(3)-codes. Finally, the results obtained are discussed with respect to Crick's hypothesis about frame-shift-detecting codes without commas.
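The comma-free property is easy to state computationally for dinucleotides: in the concatenation of any two codewords u and v, the only frame-shifted window of length 2 is u[1] + v[0], and a comma-free code must never contain it. A minimal sketch, with toy codes that are not taken from the paper:

```python
def is_comma_free(code):
    """A dinucleotide code X is comma-free if no codeword appears at a
    frame-shifted position inside the concatenation of any two codewords.
    For 2-letter words, every shifted window of u + v is u[1] + v[0]."""
    return all(u[1] + v[0] not in code for u in code for v in code)

# Hypothetical toy codes (illustrative only):
print(is_comma_free({"AC", "GC"}))  # no shifted window lands in the code
print(is_comma_free({"AT", "TA"}))  # "AT" + "AT" contains "TA" frame-shifted
```

The first set is comma-free (its shifted windows are only CA and CG); the second is not, since reading "ATAT" one position out of frame recovers the codeword "TA".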

  9. DNA solution of the maximal clique problem.

    PubMed

    Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A

    1997-10-17

    The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
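For an instance as small as six vertices, the search that the DNA pool performs chemically can be mirrored in silicon by exhaustive enumeration. A brute-force sketch on a hypothetical six-vertex graph (not the instance used in the experiment):

```python
from itertools import combinations

def maximal_cliques(n, edges):
    """Exhaustively enumerate maximal cliques, scanning candidate vertex
    subsets from largest to smallest -- analogous to building the full
    pool of subsets and then filtering out non-cliques and non-maximal
    ones. Only feasible for tiny n, as in the six-vertex experiment."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    cliques = []
    for k in range(n, 0, -1):
        for sub in combinations(range(n), k):
            is_clique = all(b in adj[a] for a, b in combinations(sub, 2))
            if is_clique and not any(set(sub) <= c for c in cliques):
                cliques.append(set(sub))
    return cliques

# A hypothetical instance: two triangles bridged by the edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(maximal_cliques(6, edges))
```

Scanning from large subsets down means a smaller clique is reported only if no already-found clique contains it, which is exactly the maximality condition.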

  10. Ehrenfest's Lottery--Time and Entropy Maximization

    ERIC Educational Resources Information Center

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
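One way to realize the lottery computationally: repeatedly draw a marble at random and drop it into a randomly chosen urn, then watch the Shannon entropy of the occupation numbers climb toward its maximum at the uniform distribution. The dynamics below are an assumed simplification of the thought experiment:

```python
import math
import random

def entropy(counts):
    """Shannon entropy of the marble-over-urns distribution."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def lottery(counts, steps, rng):
    """Diffusion-like mixing: draw a marble uniformly at random and
    drop it into a uniformly chosen urn (an assumed version of the
    lottery, not necessarily the article's exact rules)."""
    counts = list(counts)
    urns = range(len(counts))
    for _ in range(steps):
        src = rng.choices(urns, weights=counts)[0]
        counts[src] -= 1
        counts[rng.randrange(len(counts))] += 1
    return counts

rng = random.Random(1)
final = lottery([30, 0, 0], 500, rng)
print(final, entropy(final))
```

Starting from all 30 marbles in one urn (entropy 0), the mixed state is typically near-uniform, approaching the maximum entropy ln 3 for three urns.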

  11. Maximizing the Motivated Mind for Emergent Giftedness.

    ERIC Educational Resources Information Center

    Rea, Dan

    2001-01-01

    This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…

  12. A Model of College Tuition Maximization

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. Students' enrollment decisions are stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  13. Formation Control for the Maxim Mission.

    NASA Technical Reports Server (NTRS)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. The Stellar Imager mission requirements are on the same order as those for MAXIM. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; (2) the formation control architecture devised for such missions; (3) the design of the formation control laws to maintain very high precision relative positions; and (4) the levels of fuel usage required in the duration of these missions. Specific preliminary results are presented for two spacecraft within the MAXIM mission.

  14. Physical activity extends life expectancy

    Cancer.gov

    Leisure-time physical activity is associated with longer life expectancy, even at relatively low levels of activity and regardless of body weight, according to a study by a team of researchers led by the NCI.

  15. On a nonstandard Brownian motion and its maximal function

    NASA Astrophysics Data System (ADS)

    Andrade, Bernardo B. de

    2015-07-01

This article uses Radically Elementary Probability Theory (REPT) to prove results about the Wiener walk (the radically elementary Brownian motion) without the technical apparatus required by stochastic integration. The techniques used replace measure-theoretic tools by discrete probability and the rigorous use of infinitesimals. Specifically, REPT is applied to the results in Palacios (The American Statistician, 2008) to calculate certain expectations related to the Wiener walk and its maximal function. Because Palacios uses mostly combinatorics and no measure theory, his results carry over through REPT with minimal changes. The paper also presents a construction of the Wiener walk which is intended to mimic the construction of Brownian motion from "continuous" white noise. A brief review of the nonstandard model on which REPT is based is given in the Appendix in order to minimize the need for previous exposure to the subject.
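The Wiener walk itself is elementary to simulate. The sketch below (a plain ±1 random walk plus its running maximum, assumed here as the relevant form of the maximal function) shows the discrete objects REPT reasons about, with no measure theory in sight:

```python
import random

def wiener_walk(steps, rng):
    """The Wiener walk: a symmetric +/-1 random walk started at 0,
    the radically elementary stand-in for Brownian motion."""
    x, path = 0, [0]
    for _ in range(steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

def maximal_function(path):
    """Running maximum of the walk over the whole horizon (one common
    form of the maximal function whose expectation is of interest)."""
    return max(path)

rng = random.Random(7)
walk = wiener_walk(1000, rng)
print(maximal_function(walk))
```

Averaging `maximal_function` over many independent walks gives a Monte Carlo estimate of the expectations the paper derives exactly.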

  16. Loops and multiple edges in modularity maximization of networks

    NASA Astrophysics Data System (ADS)

    Cafieri, Sonia; Hansen, Pierre; Liberti, Leo

    2010-04-01

The modularity maximization model proposed by Newman and Girvan for the identification of communities in networks works for general graphs possibly with loops and multiple edges. However, the applications usually correspond to simple graphs. These graphs are compared to a null model where the degree distribution is maintained but edges are placed at random. Therefore, in this null model there will be loops and possibly multiple edges. Sharp bounds on the expected number of loops, and their impact on the modularity, are derived. Then, building upon the work of Massen and Doye, but using algebra rather than simulation, we propose modified null models associated with graphs without loops but with multiple edges, graphs with loops but without multiple edges, and graphs with neither loops nor multiple edges. We validate our models by using the exact algorithm for clique partitioning of Grötschel and Wakabayashi.
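For reference, Newman-Girvan modularity for a given partition can be computed directly from an edge list. The sketch below follows the general-graph convention in which a loop contributes 2 to its vertex's degree; the example graph is a made-up illustration:

```python
def modularity(n, edges, part):
    """Newman-Girvan modularity Q = sum_c [ m_c/m - (d_c / 2m)^2 ],
    where m_c counts edges inside community c and d_c is the total
    degree of its vertices. A loop (a == b) adds 2 to its vertex's
    degree, matching the general-graph null-model convention."""
    m = len(edges)
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    q = 0.0
    for c in set(part):
        members = {v for v in range(n) if part[v] == c}
        mc = sum(1 for a, b in edges if a in members and b in members)
        dc = sum(deg[v] for v in members)
        q += mc / m - (dc / (2 * m)) ** 2
    return q

# Two triangles bridged by one edge, split into their natural communities:
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(modularity(6, edges, [0, 0, 0, 1, 1, 1]))  # 5/14, about 0.357
```

Putting every vertex in one community gives Q = 0, the baseline against which community structure is measured.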

  17. Modularity maximization using completely positive programming

    NASA Astrophysics Data System (ADS)

    Yazdanparast, Sakineh; Havens, Timothy C.

    2017-04-01

Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank-minimization approach is proposed. The performance of the proposed method is evaluated on several real-world benchmark data sets for community detection. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.

  18. Maximally discordant mixed states of two qubits

    SciTech Connect

    Galve, Fernando; Giorgi, Gian Luca; Zambrini, Roberta

    2011-01-15

    We study the relative strength of classical and quantum correlations, as measured by discord, for two-qubit states. Quantum correlations appear only in the presence of classical correlations, while the reverse is not always true. We identify the family of states that maximize the discord for a given value of the classical correlations and show that the largest attainable discord for mixed states is greater than for pure states. The difference between discord and entanglement is emphasized by the remarkable fact that these states do not maximize entanglement and are, in some cases, even separable. Finally, by random generation of density matrices uniformly distributed over the whole Hilbert space, we quantify the frequency of the appearance of quantum and classical correlations for different ranks.
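The random generation of density matrices used in the frequency analysis can be sketched with the Ginibre construction, which samples the Hilbert-Schmidt (flat) measure; that this matches the paper's "uniformly distributed over the whole Hilbert space" is an assumption here:

```python
import numpy as np

def random_density_matrix(dim, rng):
    """Ginibre construction rho = G G† / Tr(G G†): draw a complex
    Gaussian matrix G, form the positive-semidefinite product, and
    normalize the trace. Samples the Hilbert-Schmidt measure."""
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(42)
rho = random_density_matrix(4, rng)  # a random two-qubit mixed state
print(np.trace(rho).real, np.linalg.eigvalsh(rho).min())
```

By construction the output is Hermitian, positive semidefinite, and unit-trace, so it is a valid two-qubit state on which correlation measures such as discord could then be evaluated.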

  19. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
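The Langevin dynamics mentioned in the abstract can be integrated with the standard Euler-Maruyama scheme. The linear drift (a harmonic well) and the parameter values below are illustrative assumptions, not the paper's model:

```python
import math
import random

def langevin_path(x0, drift, dt, steps, diffusion, rng):
    """Euler-Maruyama integration of the Langevin equation
    dx = drift(x) dt + sqrt(2 D dt) * xi, with xi ~ N(0, 1):
    stochastic deviations around the most probable trajectory."""
    x, path = x0, [x0]
    for _ in range(steps):
        x += drift(x) * dt + math.sqrt(2 * diffusion * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# With diffusion switched off, the walker follows the deterministic
# (most probable) trajectory of the drift alone: x_n = x0 * (1 - dt)^n.
path = langevin_path(1.0, lambda x: -x, 0.01, 100, 0.0, random.Random(0))
print(path[-1])
```

Running an ensemble of such paths with nonzero diffusion and histogramming them approximates the solution of the associated Fokker-Planck equation.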

  20. Nondecoupling of maximal supergravity from the superstring.

    PubMed

    Green, Michael B; Ooguri, Hirosi; Schwarz, John H

    2007-07-27

We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d) torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d ≥ 4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d ≥ 4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland.

  1. Experimental implementation of maximally synchronizable networks

    NASA Astrophysics Data System (ADS)

    Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.

    2016-04-01

    Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. By tuning the weights of any given network so as to reach the lowest possible eigenratio λN /λ2, the synchronized state is guaranteed to be maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of the synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on the synchronizability is proposed and explored both theoretically and experimentally.
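The eigenratio λN/λ2 that the weight-tuning minimizes is straightforward to compute from a graph's Laplacian. The toy graphs below are undirected and only illustrate the index (the paper's MSNs are weighted acyclic directed networks):

```python
import numpy as np

def eigenratio(adj):
    """Synchronizability index lambda_N / lambda_2 of the Laplacian
    L = D - A; the smaller the ratio, the wider the range of coupling
    strengths over which synchronization is stable (1 is the minimum)."""
    a = np.asarray(adj, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a
    evals = np.sort(np.linalg.eigvalsh(lap))
    return evals[-1] / evals[1]

k4 = np.ones((4, 4)) - np.eye(4)                    # complete graph K4
c4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
               [0, 1, 0, 1], [1, 0, 1, 0]], float)  # 4-cycle C4
print(eigenratio(k4), eigenratio(c4))               # 1.0 and 2.0
```

K4 attains the optimal ratio of 1 (Laplacian spectrum 0, 4, 4, 4), while the sparser 4-cycle (spectrum 0, 2, 2, 4) synchronizes over a narrower coupling range.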

  2. Uplink Array Calibration via Far-Field Power Maximization

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Mukai, R.; Lee, D.

    2006-01-01

Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
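The calibration idea (adjusting each antenna's commanded phase using only a scalar far-field power measurement) can be caricatured as a coordinate sweep over a phase grid. The grid search, the offset values, and the three-antenna setup are illustrative assumptions, not the DSN procedure:

```python
import cmath
import math

def received_power(commanded, offsets):
    """Far-field power of N combined carriers, |sum exp(i(c_k + o_k))|^2,
    where o_k are the unknown path/electronics phase offsets."""
    s = sum(cmath.exp(1j * (c + o)) for c, o in zip(commanded, offsets))
    return abs(s) ** 2

def calibrate(offsets, grid, sweeps=3):
    """Toy power-maximization loop: sweep each antenna's commanded phase
    over a trial grid, keeping the value that maximizes the one quantity
    we can measure -- total power returned from the far-field target."""
    commanded = [0.0] * len(offsets)
    for _ in range(sweeps):
        for k in range(len(commanded)):
            commanded[k] = max(
                grid,
                key=lambda p: received_power(
                    commanded[:k] + [p] + commanded[k + 1:], offsets))
    return commanded

offsets = [0.3, -1.0, 2.0]                       # hypothetical unknown offsets
grid = [i * 2 * math.pi / 64 for i in range(64)]
commanded = calibrate(offsets, grid)
print(received_power(commanded, offsets))        # approaches N^2 = 9
```

Each sweep can only increase the measured power (the current setting is always among the candidates), so the carriers converge toward mutual phase alignment and near-coherent combining.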

  3. Beyond "utilitarianism": maximizing the clinical impact of moral judgment research.

    PubMed

    Rosas, Alejandro; Koenigs, Michael

    2014-01-01

    The use of hypothetical moral dilemmas--which pit utilitarian considerations of welfare maximization against emotionally aversive "personal" harms--has become a widespread approach for studying the neuropsychological correlates of moral judgment in healthy subjects, as well as in clinical populations with social, cognitive, and affective deficits. In this article, we propose that a refinement of the standard stimulus set could provide an opportunity to more precisely identify the psychological factors underlying performance on this task, and thereby enhance the utility of this paradigm for clinical research. To test this proposal, we performed a re-analysis of previously published moral judgment data from two clinical populations: neurological patients with prefrontal brain damage and psychopathic criminals. The results provide intriguing preliminary support for further development of this assessment paradigm.

  4. Profit Maximization Models for Exponential Decay Processes.

    DTIC Science & Technology

    1980-08-01

…assumptions could easily be analyzed in similar fashion. References: [1] Bensoussan, A., Hurst, E.G. and Näslund, B., Management Applications of Modern… Profit Maximization Models for Exponential Decay Processes, Technical Report, August 1980.

  5. Maximal respiratory pressures among adolescent swimmers.

    PubMed

    Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M

    2011-01-01

Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of an Olympic junior swim team, at baseline and after a standard physical training; and to determine if there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify whether MIP and MEP could be used as markers of an athlete's performance.

  6. Formation Control for the MAXIM Mission

    NASA Technical Reports Server (NTRS)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.

  7. Using return on investment to maximize conservation effectiveness in Argentine grasslands.

    PubMed

    Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James

    2010-12-07

    The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
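The three strategies compare naturally as greedy selection rules under a fixed budget. The parcel names, costs, and benefit scores below are made-up numbers chosen so the effect the abstract describes (return on investment beating both alternatives) is visible:

```python
def select(parcels, budget, priority):
    """Greedy purchase: walk parcels in priority order, buying whatever
    still fits the budget. Parcels are (name, cost, benefit) triples,
    hypothetical stand-ins for ecosystem-type conservation benefit."""
    chosen, spent = [], 0.0
    for name, cost, benefit in sorted(parcels, key=priority):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

parcels = [("A", 1.0, 1.5), ("B", 5.0, 6.0), ("C", 2.0, 4.0), ("D", 3.0, 5.0)]
budget = 5.0
min_cost    = select(parcels, budget, lambda p: p[1])          # cheapest first
max_benefit = select(parcels, budget, lambda p: -p[2])         # biggest benefit first
roi         = select(parcels, budget, lambda p: -p[2] / p[1])  # benefit per dollar
print(min_cost, max_benefit, roi)
```

Here maximize-benefit spends the whole budget on one parcel (benefit 6.0), minimize-cost accumulates cheap, low-value parcels (benefit 5.5), and return on investment buys the two best benefit-per-dollar parcels (benefit 9.0).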

  8. Undergraduates' Perceptions of Employer Expectations

    ERIC Educational Resources Information Center

    DuPre, Carrie; Williams, Kate

    2011-01-01

    Research conducted by the National Association of Colleges and Employers (NACE) indicates that employers across industries seek similar skills in job applicants; yet employers often report finding these desired skills lacking in new hires. This study closes the gap in understanding between employer expectations and student perceptions regarding…

  9. Career Expectations of Accounting Students

    ERIC Educational Resources Information Center

    Elam, Dennis; Mendez, Francis

    2010-01-01

    The demographic make-up of accounting students is dramatically changing. This study sets out to measure how well the profession is ready to accommodate what may be very different needs and expectations of this new generation of students. Non-traditional students are becoming more and more of a tradition in the current college classroom.…

  10. Life Expectancy of Kibbutz Members.

    ERIC Educational Resources Information Center

    Leviatan, Uri; And Others

    1986-01-01

    Data are presented demonstrating that the life expectancy of kibbutz members--both men and women--is higher than that of the overall Jewish population in Israel. These data add to and support other research findings illustrating the more positive mental health and well-being found among kibbutz members than among other comparative populations.…

  11. Education: Expectation and the Unexpected

    ERIC Educational Resources Information Center

    Fulford, Amanda

    2016-01-01

    This paper considers concepts of expectation and responsibility, and how these drive dialogic interactions between tutor and student in an age of marketised Higher Education. In thinking about such interactions in terms of different forms of exchange, the paper considers the philosophy of Martin Buber and Emmanuel Levinas on dialogic…

  12. Metaphors As Storehouses of Expectation.

    ERIC Educational Resources Information Center

    Beavis, Allan K.; Thomas, A. Ross

    1996-01-01

    Explores how metaphors are used to identify and store some expectations that structure schools' interactions and communications. Outlines a systems-theoretical view of schools derived from Niklas Luhmann's social theories. Illustrates how the metaphors identified in an earlier study provide material contexts for identifying and storing structures…

  13. Primary expectations of secondary metabolites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant secondary metabolites (e.g., phenolics) are important for human health, in addition to the organoleptic properties they impart to fresh and processed foods. Consumer expectations such as appearance, taste, or texture influence their purchasing decisions. Thorough identification of phenolic com...

  14. Privacy Expectations in Online Contexts

    ERIC Educational Resources Information Center

    Pure, Rebekah Abigail

    2013-01-01

    Advances in digital networked communication technology over the last two decades have brought the issue of personal privacy into sharper focus within contemporary public discourse. In this dissertation, I explain the Fourth Amendment and the role that privacy expectations play in the constitutional protection of personal privacy generally, and…

  15. Primary expectations of secondary metabolites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    My program examines the plant secondary metabolites (i.e. phenolics) important for human health, and which impart the organoleptic properties that are quality indicators for fresh and processed foods. Consumer expectations such as appearance, taste, or texture influence their purchasing decisions; a...

  16. Labor Market Experiences and Expectancies.

    ERIC Educational Resources Information Center

    Gurin, Patricia

    This paper reports on a study which measured labor market experience and its possible effects on workers' psychological expectancies. Past efforts that employed black and white men and women had made to improve their market situations are described, as well as attributions they gave for their experiences. Work or educational changes attempted by…

  17. Reasonable Expectation of Adult Behavior.

    ERIC Educational Resources Information Center

    Todaro, Julie

    1999-01-01

    Discusses staff behavioral problems that prove difficult for successful library management. Suggests that reasonable expectations for behavior need to be established in such areas as common courtesies, environmental issues such as temperature and noise levels, work relationships and values, diverse work styles and ways of communicating, and…

  18. Great Expectations and New Beginnings

    ERIC Educational Resources Information Center

    Davis, Frances A.

    2009-01-01

    Great Expectations and New Beginnings is a prenatal family support program run by the Family, Infant, and Preschool Program (FIPP) in North Carolina. FIPP has developed an evidence-based integrated framework of early childhood intervention and family support that includes three primary components: providing intervention in everyday family…

  19. Expectation Effects in Organizational Change

    ERIC Educational Resources Information Center

    King, Albert S.

    1974-01-01

    The experiment reported here was conducted during a 12-month period at four plants owned by the same company. Managers were given artificial reports about previous findings obtained in implementing job enlargement and job rotation programs. Led to expect higher productivity as a result of these organizational innovations, the managers increased…

  20. Evaluation of Behavioral Expectation Scales.

    ERIC Educational Resources Information Center

    Zedeck, Sheldon; Baker, Henry T.

    Behavioral Expectation Scales developed by Smith and Kendall were evaluated. Results indicated slight interrater reliability between Head Nurses and Supervisors, moderate dependence among five performance dimensions, and correlation between two scales and tenure. Results are discussed in terms of procedural problems, critical incident problems,…

  1. Compaction properties of powders: the relationship between compression cycle hysteresis areas and maximally applied punch pressures.

    PubMed

    Khossravi, D

    1999-08-01

    The consolidation behaviors of various pharmaceutical solids were characterized by investigating the relationship between the calculated hysteresis areas and the maximally applied punch pressures. An Instron universal testing apparatus and an instrumented die were used to generate compression cycle profiles at various maximally applied punch pressures for the materials studied. Based on the profiles obtained, hysteresis areas were calculated for the materials studied as a function of maximally applied punch pressures. Furthermore, model profiles describing the plastic and brittle fracture processes were utilized to derive mathematical relationships between the calculated hysteresis cycle areas and the maximally applied punch pressures. The mathematical relationships derived indicate that a linear relationship between hysteresis areas and maximally applied punch pressures exists for plastic materials, whereas for brittle materials the hysteresis areas are related to the square of the maximally applied punch pressures. Experimental data obtained support the mathematical relationships derived. The goodness of fit to the models derived is used to rank order the consolidation mechanism of various drugs and pharmaceutical excipients.
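    The derived relationships lend themselves to a simple fitting procedure. The sketch below is illustrative only (the function names and synthetic data are invented, not taken from the paper): it fits the one-parameter linear (plastic) and quadratic (brittle-fracture) models to hysteresis-area data and compares them by goodness of fit.

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination for a fitted curve."""
    ss_res = float(np.sum((y - y_fit) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot

def rank_consolidation(pressures, areas):
    """Fit both one-parameter models and report their goodness of fit.

    Plastic deformation:  area = a * P     (linear in max punch pressure P)
    Brittle fracture:     area = b * P**2  (quadratic in P)
    """
    P = np.asarray(pressures, dtype=float)
    A = np.asarray(areas, dtype=float)
    a = np.sum(P * A) / np.sum(P ** 2)       # least-squares slope, linear model
    b = np.sum(P ** 2 * A) / np.sum(P ** 4)  # least-squares slope, quadratic model
    return {"plastic_r2": r_squared(A, a * P),
            "brittle_r2": r_squared(A, b * P ** 2)}

# Synthetic data for a hypothetical brittle material: area grows ~ P**2
rng = np.random.default_rng(0)
P = np.linspace(50.0, 300.0, 8)  # maximally applied punch pressures (illustrative units)
A = 0.002 * P ** 2 * (1.0 + 0.02 * rng.normal(size=P.size))
fits = rank_consolidation(P, A)
print(fits)
```

    On data generated from a quadratic law, the quadratic model scores the higher R-squared, mirroring how the authors use goodness of fit to rank-order consolidation mechanisms.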

  2. Utility solar water heating workshops

    SciTech Connect

    Barrett, L.B.

    1992-01-01

    The objective of this project was to explore the problems and opportunities for utility participation with solar water heating as a DSM measure. Expected benefits from the workshops included an increased awareness and interest by utilities in solar water heating as well as greater understanding by federal research and policy officials of utility perspectives for purposes of planning and programming. Ultimately, the project could result in better information transfer, increased implementation of solar water heating programs, greater penetration of solar systems, and more effective research projects. The objective of the workshops was satisfied. Each workshop succeeded in exploring the problems and opportunities for utility participation with solar water heating as a DSM option. The participants provided a range of ideas and suggestions regarding useful next steps for utilities and NREL. According to evaluations, the participants believed the workshops were very valuable, and they returned to their utilities with new information, ideas, and commitment.

  3. Graded Expectations: Predictive Processing and the Adjustment of Expectations during Spoken Language Comprehension

    PubMed Central

    Boudewyn, Megan A.; Long, Debra L.; Swaab, Tamara Y.

    2015-01-01

    The goal of this study was to investigate the use of local and global context in the processing of incoming words during listening comprehension. Local context was manipulated by presenting a target noun (e.g., cake, veggies) that was preceded by a word that described a prototypical or atypical feature of the noun (e.g., sweet, healthy). Global context was manipulated by presenting the noun in a scenario that was consistent or inconsistent with the critical noun (e.g., a birthday party). ERPs were examined at the feature word and at the critical noun. An N400 effect was found at the feature word, reflecting the effect of compatibility with the global context. Global predictability and local feature-word consistency interacted at the critical noun: a larger N200 was found to nouns that mismatched predictions when the context was maximally constraining, relative to nouns in the other conditions. A graded N400 response was observed at the critical noun, modulated by global predictability and feature consistency. Finally, PNP effects of context-updating were observed to nouns supported by one contextual cue (global/local), but unsupported by the other. These results indicate (1) incoming words that are compatible with context-based expectations receive a processing benefit; (2) when the context is sufficiently constraining, specific lexical items may be activated; and (3) listeners dynamically adjust their expectations when input is inconsistent with their predictions, provided that the inconsistency has some level of support from either global or local context. PMID:25673006

  4. Supergravity backgrounds for four-dimensional maximally supersymmetric Yang-Mills

    NASA Astrophysics Data System (ADS)

    Maxfield, Travis

    2017-02-01

    In this note, we describe supersymmetric backgrounds for the four-dimensional maximally supersymmetric Yang-Mills theory. As an extension of the method of Festuccia and Seiberg to sixteen supercharges in four dimensions, we utilize the coupling of the gauge theory to maximally extended conformal supergravity. Included among the fields of the conformal supergravity multiplet is the complexified coupling parameter of the gauge theory; therefore, backgrounds with spacetime varying coupling — such as appear in F-theory and Janus configurations — are naturally included in this formalism. We demonstrate this with a few examples from past literature.

  5. Life expectancy of kibbutz members.

    PubMed

    Leviatan, U; Cohen, J; Jaffe-Katz, A

    1986-01-01

    Data are presented demonstrating that the life expectancy (LE) of kibbutz members--both men and women--is higher than that of the overall Jewish population in Israel. Closer inspection of the death rates at various ages reveals that, from age thirty, those of kibbutz women are lower than those of the Jewish population. Although those of kibbutz men are actually higher until age forty-nine, nevertheless the LE of kibbutz members (based on death rates) surpasses that of Jews in Israel. These data add to and support other research findings illustrating the more positive mental health and well-being found among kibbutz members than among other comparative populations. Similarly, the factors contributing to kibbutz members' life expectancy evolve from this quality of life, especially as this quality of life affects old age.

  6. Maximizing versus satisficing: happiness is a matter of choice.

    PubMed

    Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R

    2002-11-01

    Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.

  7. Postactivation Potentiation Biases Maximal Isometric Strength Assessment

    PubMed Central

    Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio

    2014-01-01

    Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX  versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables. PMID:25133157

  8. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
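    A minimal sketch of the trust-region idea behind TRIKE (hypothetical code, not the authors' implementation): maximize EI over candidates inside the current trust region, then grow or shrink the region according to the ratio of actual to expected improvement. The closed-form EI below is the standard one; the `surrogate` here is a stand-in that returns the true value with a fixed uncertainty, whereas a real implementation would refit a Kriging model after each evaluation.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for a Gaussian surrogate prediction (mean mu, std sigma)."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi

def trust_region_ei_step(f, x_best, f_best, radius, surrogate, n_cand=200):
    """One iterate: pick the EI-maximizing candidate inside the trust region,
    evaluate it, and resize the region by the actual/expected improvement ratio."""
    rng = np.random.default_rng(0)
    cands = x_best + radius * rng.uniform(-1.0, 1.0, size=(n_cand, x_best.size))
    eis = [expected_improvement(*surrogate(c), f_best) for c in cands]
    x_new = cands[int(np.argmax(eis))]
    actual = f_best - f(x_new)
    ratio = actual / max(eis) if max(eis) > 0 else 0.0
    radius *= 2.0 if ratio > 0.75 else (0.5 if ratio < 0.25 else 1.0)
    if actual > 0.0:                           # accept only improving points
        x_best, f_best = x_new, f(x_new)
    return x_best, f_best, radius

# Toy demo on a quadratic bowl. The "surrogate" returns the true value plus a
# fixed uncertainty of 0.1; a real method would refit a Kriging model here.
f = lambda x: float(np.sum(x ** 2))
surrogate = lambda x: (f(x), 0.1)
x, fx, radius = np.array([2.0, -1.5]), 6.25, 1.0
for _ in range(30):
    x, fx, radius = trust_region_ei_step(f, x, fx, radius, surrogate)
print(round(fx, 4))
```

    The expand/shrink thresholds (0.25 and 0.75) are the usual trust-region heuristics; the paper tunes its own adjustment rule.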

  9. Electromagnetically induced grating with maximal atomic coherence

    SciTech Connect

    Carvalho, Silvania A.; Araujo, Luis E. E. de

    2011-10-15

    We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-{Lambda} atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.

  10. Approximation Algorithms for Free-Label Maximization

    NASA Astrophysics Data System (ADS)

    de Berg, Mark; Gerrits, Dirk H. P.

    Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.

  11. Maximizing results in reconstruction of cheek defects.

    PubMed

    Mureau, Marc A M; Hofer, Stefan O P

    2009-07-01

    The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided in central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications as well as tips and tricks to avoid complications and maximize aesthetic results.

  12. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    PubMed

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and…

  13. Manpower Planning and Personnel Management Models Based on Utility Theory,

    DTIC Science & Technology

    1980-08-01

    and Morgenstern [1947]. 2.3 Assessment of Utility Functions For decision problems with multiple objectives, multiattribute utility theory provides... multiattribute utility theory and applications. In Multiple Criteria Decision Making, M.K. Starr and M. Zelany (eds.), North Holland, Amsterdam. Fishburn...Princeton University Press, Princeton, NJ. Fishburn, P.C. (1977). Multiattribute utilities in expected utility theory. In Conflicting Objectives in

  14. Paracellular epithelial sodium transport maximizes energy efficiency in the kidney

    PubMed Central

    Pei, Lei; Nguyen, Mien T.X.; Kamat, Nikhil; Magenheimer, Lynn; Zhuo, Min; Li, Jiahua; McDonough, Alicia A.; Fields, Timothy A.; Welch, William J.; Yu, Alan S.L.

    2016-01-01

    Efficient oxygen utilization in the kidney may be supported by paracellular epithelial transport, a form of passive diffusion that is driven by preexisting transepithelial electrochemical gradients. Claudins are tight-junction transmembrane proteins that act as paracellular ion channels in epithelial cells. In the proximal tubule (PT) of the kidney, claudin-2 mediates paracellular sodium reabsorption. Here, we used murine models to investigate the role of claudin-2 in maintaining energy efficiency in the kidney. We found that claudin-2–null mice conserve sodium to the same extent as WT mice, even during profound dietary sodium depletion, as a result of the upregulation of transcellular Na-K-2Cl transport activity in the thick ascending limb of Henle. We hypothesized that shifting sodium transport to transcellular pathways would lead to increased whole-kidney oxygen consumption. Indeed, compared with control animals, oxygen consumption in the kidneys of claudin-2–null mice was markedly increased, resulting in medullary hypoxia. Furthermore, tubular injury in kidneys subjected to bilateral renal ischemia-reperfusion injury was more severe in the absence of claudin-2. Our results indicate that paracellular transport in the PT is required for efficient utilization of oxygen in the service of sodium transport. We speculate that paracellular permeability may have evolved as a general strategy in epithelial tissues to maximize energy efficiency. PMID:27214555

  15. Impacts of demand dynamics and consumer expectations on world oil prices

    SciTech Connect

    Fromholzer, D.

    1980-12-01

    This research contributes to the study of world oil prices. We examine models of rational producers and consumers. Producers set prices or production quantities to maximize the value of their oil resources. Consumers purchase oil and other commodities to maximize utility. A market solution is a time path of prices and quantities that balances the choices of producers and consumers. Most existing models address pricing implications of alternative descriptions of the technology, organization, and objectives of producers. There has been little study of pricing implications of alternative descriptions of consumer behavior. The accurate description of demand is critical for the immediate empirical testing of alternative pricing models and for the projection of future prices. We develop a dynamic model of consumer behavior to improve our ability to address pricing implications of alternative descriptions of consumer technology and objectives. We build several simplified demand models based on this dynamic model of consumer behavior. We combine these models with simplified models of producer behavior. We test the sensitivity of pricing results to alternative assumptions about consumer price expectations and to the use of different functional forms for these models. Based on these tests, we choose two alternative models to represent demand, and we reestimate these models using recent oil market data. We generate and compare price paths for each model, and we discuss implications of these results for the world oil market. We study, in particular, consumers' ability to affect market prices. Finally, we show that price-setting producers have several nearly optimal strategies at their disposal. This gives them an ability to choose pricing strategies based on non-economic factors.

  16. Optimizing Population Variability to Maximize Benefit

    PubMed Central

    Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye

    2015-01-01

    Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the “right mix” of individuals in a population. PMID:26650247
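    The "binary rulers" idea can be illustrated with a short simulation (a hedged sketch, not the authors' model or code): each individual compares a quantity against its own threshold and votes yes or no, and the population estimate is decoded from the vote fraction. Too little threshold variability wastes the population on a single bit; too much drowns the signal; an intermediate spread minimizes the error.

```python
import numpy as np

def measurement_error(width, n_individuals=100, n_trials=2000, seed=1):
    """Mean absolute error when a population of binary 'rulers' estimates a value.

    Each individual answers yes/no ('is the quantity above my threshold?');
    the population estimate is decoded from the fraction of yes votes.
    `width` controls the spread (variability) of thresholds across individuals.
    """
    rng = np.random.default_rng(seed)
    thresholds = rng.uniform(-width, width, n_individuals)
    truths = rng.uniform(-1.0, 1.0, n_trials)   # quantities to be measured
    errors = []
    for theta in truths:
        p = np.mean(theta > thresholds)          # fraction of yes votes
        estimate = width * (2.0 * p - 1.0)       # invert the uniform-threshold model
        errors.append(abs(estimate - theta))
    return float(np.mean(errors))

# Sweep the population variability: too little and too much both hurt.
errors = {w: measurement_error(w) for w in (0.25, 1.0, 8.0)}
best = min(errors, key=errors.get)
print(best, {w: round(e, 3) for w, e in errors.items()})
```

    With narrow spread (0.25) the estimate saturates for most inputs; with very wide spread (8.0) few thresholds fall in the useful range; the intermediate spread wins.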

  17. Maximal liquid bridges between horizontal cylinders.

    PubMed

    Cooray, Himantha; Huppert, Herbert E; Neufeld, Jerome A

    2016-08-01

    We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.
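    For context (standard theory, not reproduced from the paper): the capillary length that sets the scalings mentioned above comes from the linearized Laplace-Young balance between surface tension and hydrostatic pressure for a small-slope meniscus h(x):

```latex
% Linearized Laplace-Young equation for a 2D meniscus of small slope
% (standard form; \gamma = surface tension, \rho = liquid density):
\gamma \, h''(x) = \rho g \, h(x)
\quad\Longrightarrow\quad
h(x) \propto e^{-x/\ell_c},
\qquad
\ell_c = \sqrt{\frac{\gamma}{\rho g}} .
```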

  18. Maximal liquid bridges between horizontal cylinders

    NASA Astrophysics Data System (ADS)

    Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.

    2016-08-01

    We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.

  19. Maximizing strain in miniaturized dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert

    2015-04-01

    We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with half the displacement that can be obtained with a dead load. We also extend our fabrication process to show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.

  20. Maximal lactate steady state in Judo

    PubMed Central

    de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio

    2014-01-01

    Summary. Background: The purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo specific incremental test (JSIT) for aerobic demand evaluation. Methods: To test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and a 30-minute MLSS test, where both tests were performed mimicking the UchiKomi drills. Results: The intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p=0.40). In addition, a high and significant correlation was observed between MLSS and RCT (r=0.90, p=0.002), as well as high agreement. Conclusions: RCT measured during the JSIT is a valid procedure for measuring aerobic demand, respecting the ecological validity of judo. PMID:25332923

  1. Optimizing Population Variability to Maximize Benefit.

    PubMed

    Izu, Leighton T; Bányász, Tamás; Chen-Izu, Ye

    2015-01-01

    Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the "right mix" of individuals in a population.

  2. Spiders Tune Glue Viscosity to Maximize Adhesion.

    PubMed

    Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali

    2015-11-24

    Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. By using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10(5)-10(6) cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salt interactions to control viscosity can provide a simple mechanism to design humidity-responsive smart adhesives.

  3. Maximal coherence in a generic basis

    NASA Astrophysics Data System (ADS)

    Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.

    2016-12-01

    Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence have been one of the long-standing central topics in quantum information science. Within the framework of a resource theory of quantum coherence proposed recently, a fiducial basis should be preselected for characterizing the quantum coherence in specific circumstances, namely, the quantum coherence is a basis-dependent quantity. Therefore, a natural question is raised: what are the maximum and minimum coherences contained in a certain quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to verify in which basis the quantum coherence is maximal. Based on the coherence measure of relative entropy, we indicate the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a globally optimal choice.
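    The basis dependence is easy to demonstrate numerically (an illustrative sketch, not the paper's code): for the pure state |0⟩ in dimension d = 4, the relative entropy of coherence vanishes in the computational basis but reaches its maximum, log2(d) = 2, in the discrete Fourier basis.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (base 2) of a probability vector; ignores ~zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def rel_entropy_coherence(rho, basis):
    """Relative entropy of coherence C(rho) = S(diag_B(rho)) - S(rho),
    where diag_B keeps only the populations of rho in the columns of `basis`."""
    pops = np.real(np.einsum('ij,jk,ki->i', basis.conj().T, rho, basis))
    evals = np.linalg.eigvalsh(rho)
    return shannon_entropy(pops) - shannon_entropy(evals)

d = 4
rho = np.zeros((d, d), dtype=complex)
rho[0, 0] = 1.0                       # the pure state |0><0|
computational = np.eye(d, dtype=complex)
# Discrete Fourier basis: column k is the k-th DFT vector
fourier = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

c_comp = rel_entropy_coherence(rho, computational)     # ~0: state is diagonal here
c_fourier = rel_entropy_coherence(rho, fourier)        # ~2 = log2(d): maximal
print(c_comp, c_fourier)
```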

  4. Rehabilitation Professionals' Participation Intensity and Expectations of Transition Roles

    ERIC Educational Resources Information Center

    Oertle, Kathleen Marie

    2009-01-01

    In this mixed-methods study, an on-line survey and interviews were utilized to gather data regarding the level of participation and expectations rehabilitation professionals have of teachers, youth with disabilities, parents, and themselves during the transition process. The survey response rate was 73.0% (N = 46). Six were selected for interviews…

  5. Normative Expectations and Individual Decisions concerning Media Gratification Choices.

    ERIC Educational Resources Information Center

    Lichtenstein, Allen; Rosenfeld, Lawrence

    1984-01-01

    Results indicate that each of the nine media studied (newspapers, magazines, commercial and public television, books, radio, friends, recorded music, film) has a clear, socially defined image, suggesting a two-stage model of media channel utilization--normative expectations followed by individual decisions. (PD)

  6. ADHD and marijuana use expectancies in young adulthood

    PubMed Central

    Harty, Seth C.; Pedersen, Sarah L.; Gnagy, Elizabeth M.; Pelham, William E.; Molina, Brooke S. G.

    2015-01-01

    Objective: This study examined mean-level differences in marijuana expectancies and the differential associations between expectancies and marijuana use for individuals with and without a history of Attention-Deficit/Hyperactivity Disorder (ADHD). Background: Substance use expectancies are a widely studied risk factor for alcohol and other drug use. The relations between marijuana use expectancies and self-reported marijuana use have not been examined in young adults with ADHD, a population shown to be at risk for marijuana use. Method: Participants were 306 (190 ADHD and 116 non-ADHD) young adults (M age = 20.06, SD = 2.03) from the Pittsburgh ADHD Longitudinal Study (PALS) who provided data about marijuana use and marijuana use expectancies. Results: Individuals in the ADHD group reported lower levels of social enhancement, tension reduction, and cognitive and behavioral impairment expectancies compared to individuals in the non-ADHD group. Positive and negative marijuana use expectancies were associated with marijuana use frequency in the whole sample, and statistically significant ADHD group by expectancy interactions were found: sexual enhancement expectancies were more strongly associated with marijuana use frequency among individuals with ADHD histories, while cognitive and behavioral impairment expectancies were more strongly associated with marijuana use frequency among individuals without ADHD. Conclusions: Marijuana use expectancies may be acquired, and operate differently, for individuals with and without ADHD histories. Although future research is needed to test this speculation, these differences may be associated with ADHD-related difficulties in higher-order cognitive processes that affect the encoding and utilization of expectations regarding marijuana's effects. PMID:26548364

  7. Insufficient ct data reconstruction based on directional total variation (dtv) regularized maximum likelihood expectation maximization (mlem) method

    NASA Astrophysics Data System (ADS)

    Islam, Fahima Fahmida

    Sparse tomography is an efficient technique that saves time and minimizes cost. However, the small amount of angular data makes the image reconstruction problem ill-posed: even with exact data constraints, the inversion cannot be performed uniquely. Selecting a suitable optimization method therefore plays an important role in sparse-data CT. Regularization is a well-known way to control the artifacts of limited-angle data acquisition. In this work, we propose a directional total variation (DTV) regularized, ordered-subset (OS) type image reconstruction method for limited-data neutron CT. Total variation (TV) regularization is edge-preserving: it not only preserves sharp edges but also reduces many of the artifacts that are common in limited-data CT. However, TV itself is not direction dependent, so it is not well suited to images with a dominant direction; for such images it is important to measure the variation along particular directions. Hence a directional TV is used here as the prior term. TV regularization assumes piecewise smoothness; since the original image is not piecewise constant, a sparsifying transform is used to convert it into a sparse, piecewise-constant image. This regularization function (DTV), together with the likelihood function, forms the objective function, which is optimized with an OS-type algorithm. In general, there are two ways to make an OS method convergent; this work proposes an OS-type directional-TV-regularized likelihood reconstruction method that yields fast convergence as well as good image quality. Iteration starts from the filtered back projection (FBP) reconstructed image, and convergence is judged by a convergence index between two successive reconstructed images. Image quality is assessed by the line profile of the reconstructed image. The proposed method is compared with the commonly used FBP, MLEM, and MLEM-TV algorithms. To verify its performance, a Shepp-Logan head phantom is simulated, and a real neutron CT image is tested to demonstrate the feasibility of the algorithm for practical sparse-CT reconstruction applications.
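The MLEM iteration at the core of such methods can be sketched compactly. The following is an unregularized, single-subset illustration only (the 3-pixel system matrix and noiseless data are hypothetical); the paper's method additionally applies the directional-TV prior and ordered subsets:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Plain MLEM update: x <- x / (A^T 1) * A^T( y / (A x) )."""
    m, n = A.shape
    x = np.ones(n)                          # strictly positive start
    sens = A.T @ np.ones(m)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # data / forward projection
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Hypothetical 3-pixel, 4-ray geometry with noiseless projections
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_rec = mlem(A, y)
print(np.round(x_rec, 2))                   # approaches x_true = [2, 1, 3]
```

Because the update is multiplicative, positivity of the image is preserved automatically, which is one reason MLEM is popular for emission and neutron tomography.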

  8. Proper Timing of Foot-and-Mouth Disease Vaccination of Piglets with Maternally Derived Antibodies Will Maximize Expected Protection Levels

    PubMed Central

    Dekker, Aldo; Chénard, Gilles; Stockhofe, Norbert; Eblé, Phaedra L.

    2016-01-01

    We investigated to what extent maternally derived antibodies interfere with foot-and-mouth disease (FMD) vaccination in order to determine the factors that influence the correct vaccination for piglets. Groups of piglets with maternally derived antibodies were vaccinated at different time points following birth, and the antibody titers to FMD virus (FMDV) were measured using virus neutralization tests (VNT). We used 50 piglets from 5 sows that had been vaccinated 3 times intramuscularly in the neck during pregnancy with FMD vaccine containing strains of FMDV serotypes O, A, and Asia-1. Four groups of 10 piglets were vaccinated intramuscularly in the neck at 3, 5, 7, or 9 weeks of age using a monovalent Cedivac-FMD vaccine (serotype A TUR/14/98). One group of 10 piglets with maternally derived antibodies was not vaccinated, and another group of 10 piglets without maternally derived antibodies was vaccinated at 3 weeks of age and served as a control group. Serum samples were collected, and antibody titers were determined using VNT. In our study, the antibody responses of piglets with maternally derived antibodies vaccinated at 7 or 9 weeks of age were similar to the responses of piglets without maternally derived antibodies vaccinated at 3 weeks of age. The maternally derived antibody levels in piglets depended very strongly on the antibody titer in the sow, so the optimal time for vaccination of piglets will depend on the vaccination scheme and the quality of vaccine used in the sows and should, therefore, be monitored and reviewed on a regular basis in countries that use FMD prophylactic vaccination. PMID:27446940

  9. Vectorcardiographic loop alignment for fetal movement detection using the expectation-maximization algorithm and support vector machines.

    PubMed

    Vullings, R; Mischi, M

    2013-01-01

    Reduced fetal movement is an important parameter to assess fetal distress. Currently, no suitable methods are available that can objectively assess fetal movement during pregnancy. Fetal vectorcardiographic (VCG) loop alignment could be such a method. In general, the goal of VCG loop alignment is to correct for motion-induced changes in the VCGs of (multiple) consecutive heartbeats. However, the parameters used for loop alignment also provide information to assess fetal movement. Unfortunately, current methods for VCG loop alignment are not robust against low-quality VCG signals. In this paper, a more robust method for VCG loop alignment is developed that includes a priori information on the loop alignment, yielding a maximum a posteriori loop alignment. Classification, based on movement parameters extracted from the alignment, is subsequently performed using support vector machines, resulting in correct classification of (absence of) fetal movement in about 75% of cases. After additional validation and optimization, this method can possibly be employed for continuous fetal movement monitoring.
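The expectation-maximization machinery used here for the maximum a posteriori loop alignment follows the standard E-step/M-step alternation. A textbook illustration on a generic two-component 1-D Gaussian mixture (synthetic data; this is only the generic E/M pattern, not the paper's problem-specific alignment model):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Textbook EM for a two-component 1-D Gaussian mixture: alternate the
    E-step (posterior responsibilities) with the M-step (weighted
    re-estimation of weights, means, and variances)."""
    mu = np.array([x.min(), x.max()])       # crude but deterministic init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update parameters from the responsibilities
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)])
pi, mu, var = em_gmm_1d(x)
print(np.round(np.sort(mu), 1))             # recovers means near -2 and 3
```

In the paper's setting, the latent variables are the alignment parameters of each VCG loop rather than mixture labels, but the alternation between inferring hidden quantities and re-estimating model parameters is the same.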

  10. Primary Care Clinician Expectations Regarding Aging

    ERIC Educational Resources Information Center

    Davis, Melinda M.; Bond, Lynne A.; Howard, Alan; Sarkisian, Catherine A.

    2011-01-01

    Purpose: Expectations regarding aging (ERA) in community-dwelling older adults are associated with personal health behaviors and health resource usage. Clinicians' age expectations likely influence patients' expectations and care delivery patterns; yet, limited research has explored clinicians' age expectations. The Expectations Regarding Aging…

  11. Adaptable Careers: Maximizing Less and Exploring More

    ERIC Educational Resources Information Center

    van Vianen, Annelies E. M.; De Pater, Irene E.; Preenen, Paul T. Y.

    2009-01-01

    Today, young adults are expected to decide between educational, vocational, and job options and to make the best choice possible. Career literatures emphasize the importance of young adults' career decision making but also acknowledge the problems related to making these decisions. The authors argue that career counselors could support clients'…

  12. Racial differences in patient expectations prior to resective epilepsy surgery.

    PubMed

    Baca, Christine Bower; Cheng, Eric M; Spencer, Susan S; Vassar, Stefanie; Vickrey, Barbara G

    2009-08-01

    We assessed the nature and frequency of preoperative expectations among patients with refractory epilepsy who were enrolled in a seven-center observational study of epilepsy surgery outcomes. At enrollment, patients responded to open-ended questions about expectations for surgical outcome. With the use of an iterative cutting-and-sorting technique, expectation themes were identified and rank-ordered. Associations of expectations with race/ethnicity were evaluated. Among 391 respondents, the two most frequently endorsed expectations (any rank order) were driving (62%) and job/school (43%). When only the most important (first-ranked) expectation was analyzed, driving (53%) and cognition (17%) were most frequently offered. Nonwhites endorsed job/school and cognition more frequently and driving less frequently than whites (all P < 0.05), whether expectations of any order or only first-ranked expectations were included. Elucidating the reason for these differences can aid in the clinical decision-making process for resective surgery and potentially address disparities in its utilization.

  13. An expectation-based memory deficit in aging.

    PubMed

    Bollinger, Jacob; Rubens, Michael T; Masangkay, Edrick; Kalkstein, Jonathan; Gazzaley, Adam

    2011-05-01

    Memory performance can be enhanced by expectations regarding the appearance of ensuing stimuli. Here, we investigated the influence of stimulus-category expectation on memory performance in aging, and used fMRI to explore age-related alterations in associated neural mechanisms. Unlike younger adults, who demonstrated both working memory (WM) and long-term memory (LTM) performance benefits for face stimuli when this stimulus category was expected, older adults did not exhibit these memory benefits. Concordantly, older adults did not exhibit expectation-period activity modulation in visual association cortex (i.e., fusiform face area (FFA)), unlike younger adults. However, within the older population, individuals who demonstrated face-expectation memory benefits also exhibited expectation-period FFA activity modulation equivalent to younger adults. The older cohort also displayed diminished expectation-related functional connectivity between regions of the prefrontal cortex and the FFA, relative to younger adults, suggesting that network alterations underlie the absence of expectation-mediated cortical modulation and memory benefits. This deficit may have broader consequences for the effective utilization of predictive cues to guide attention and engender optimal cognitive performance in older individuals.

  14. Maximal energy extraction under discrete diffusive exchange

    SciTech Connect

    Hay, M. J.; Schiff, J.; Fisch, N. J.

    2015-10-15

    Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.

  15. Maximal energy extraction under discrete diffusive exchange

    NASA Astrophysics Data System (ADS)

    Hay, M. J.; Schiff, J.; Fisch, N. J.

    2015-10-01

    Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
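Because the plasma energy is a linear functional of the state densities, the maximization can indeed be posed as a linear program. A toy sketch with `scipy.optimize.linprog`, in which the reachable set is simplified, as an assumption, to all doubly stochastic rearrangements of the initial occupations (the paper's diffusive-exchange set is more restrictive):

```python
import numpy as np
from scipy.optimize import linprog

eps = np.array([0.0, 1.0, 2.0])     # level energies (hypothetical)
p = np.array([0.2, 0.3, 0.5])       # initial occupations (hypothetical)
n = len(p)

# Final energy = sum_ij eps_i D_ij p_j is linear in the rearrangement D,
# so minimizing it (i.e., maximizing extraction) is a linear program.
c = np.outer(eps, p).ravel()
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1          # row sums of D equal 1
    A_eq[n + i, i::n] = 1                   # column sums of D equal 1
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
extracted = eps @ p - res.fun
print(round(extracted, 3))          # 0.6: high occupations moved to low levels
```

The optimum lands on a permutation (a vertex of the Birkhoff polytope), sorting the largest occupations into the lowest energy levels, which is the intuitive energy-extracting rearrangement.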

  16. Multipartite maximally entangled states in symmetric scenarios

    NASA Astrophysics Data System (ADS)

    González-Guillén, Carlos E.

    2012-08-01

    We consider the class of (N+1)-partite states suitable for protocols where there is a powerful party, the authority, and the other N parties play the same role, namely, the state of their joint system lies in the symmetric Hilbert space. We show that, within this scenario, there is a "maximally entangled state" that can be transformed into any other state by a local operations and classical communication (LOCC) protocol. In addition, we show how to use the protocol efficiently, including the construction of the state, and discuss security issues for possible applications to cryptographic protocols. As an immediate consequence, we recover a sequential protocol that implements 1-to-N symmetric cloning.

  17. Polycrystalline configurations that maximize electrical resistivity

    NASA Astrophysics Data System (ADS)

    Nesi, Vincenzo; Milton, Graeme W.

    A lower bound on the effective conductivity tensor of polycrystalline aggregates formed from a single basic crystal of conductivity σ was recently established by Avellaneda, Cherkaev, Lurie, and Milton. The bound holds for any basic crystal, but for isotropic aggregates of a uniaxial crystal, the bound is achieved by a sphere-assemblage model of Schulgasser. This left open the question of attainability of the bound when the crystal is not uniaxial. The present work establishes that the bound is always attained by a rather large class of polycrystalline materials. These polycrystalline materials, with maximal electrical resistivity, are constructed by sequential lamination of the basic crystal and rotations of itself on widely separated length scales. The analysis is facilitated by introducing a tensor S = σ₀(σ₀I + σ)⁻¹, where σ₀ > 0 is chosen so that Tr S = 1. This tensor S is related to the electric field in the optimal polycrystalline configurations.

  18. Dispatch Scheduling to Maximize Exoplanet Detection

    NASA Astrophysics Data System (ADS)

    Johnson, Samson; McCrady, Nate; MINERVA

    2016-01-01

    MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.

  19. Characterizing maximally singular phase-space distributions

    NASA Astrophysics Data System (ADS)

    Sperling, J.

    2016-07-01

    Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.

  20. Robust determination of maximally localized Wannier functions

    NASA Astrophysics Data System (ADS)

    Cancès, Éric; Levitt, Antoine; Panati, Gianluca; Stoltz, Gabriel

    2017-02-01

    We propose an algorithm to determine maximally localized Wannier functions (MLWFs). This algorithm, based on recent theoretical developments, does not require any physical input such as initial guesses for the Wannier functions, unlike popular schemes based on the projection method. We discuss how the projection method can fail on fine grids when the initial guesses are too far from MLWFs. We demonstrate that our algorithm is able to find localized Wannier functions through tests on two-dimensional systems, simplified models of semiconductors, and realistic DFT systems by interfacing with the wannier90 code. We also test our algorithm on the Haldane and Kane-Mele models to examine how it fails in the presence of topological obstructions.

  1. Maximally reliable Markov chains under energy constraints.

    PubMed

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
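The benefit of adding states to an irreversible linear chain can be checked directly: the completion time of an n-stage chain with identical exponential rates is Erlang-distributed, so its coefficient of variation (a simple proxy for unreliability) falls as 1/sqrt(n), in line with the paper's result that more states yield arbitrarily high reliability. A quick simulation (the stage rate and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
cvs = {}
for n in (1, 4, 16, 64):
    # Total signal time of an n-stage irreversible chain: sum of n
    # independent exponential stage durations (Erlang distribution).
    t = rng.exponential(1.0, size=(100_000, n)).sum(axis=1)
    cvs[n] = t.std() / t.mean()
    print(n, round(cvs[n], 3), round(1 / np.sqrt(n), 3))  # empirical vs. 1/sqrt(n)
```

The simulated coefficient of variation tracks the analytic 1/sqrt(n), illustrating why, absent energy costs, longer irreversible chains are always more reliable.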

  2. Does Maximizing Information at the Cut Score Always Maximize Classification Accuracy and Consistency?

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Babcock, Ben

    2016-01-01

    A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…

  3. Anaerobic contribution during maximal anaerobic running test: correlation with maximal accumulated oxygen deficit.

    PubMed

    Zagatto, A; Redkva, P; Loures, J; Kalva Filho, C; Franco, V; Kaminagakura, E; Papoti, M

    2011-12-01

    The aims of this study were: (i) to measure energy system contributions in the maximal anaerobic running test (MART); and (ii) to verify any correlation between MART and maximal accumulated oxygen deficit (MAOD). Eleven members of the armed forces were recruited for this study. Participants performed the MART and the MAOD test, both on a treadmill. The MART consisted of intermittent exercise, 20 s of effort with 100 s of recovery after each effort period. Energy system contributions during the MART were determined from excess post-exercise oxygen consumption, lactate response, and oxygen uptake measurements. MAOD was determined from five submaximal exercises and one supramaximal exercise at an intensity corresponding to 120% of maximal oxygen uptake intensity. Energy system contributions were 65.4±1.1% aerobic, 29.5±1.1% anaerobic alactic, and 5.1±0.5% anaerobic lactic over the whole test, whereas during the effort periods alone the anaerobic contribution corresponded to 73.5±1.0%. Maximal power in the MART corresponded to 111.25±1.33 mL/kg/min but did not significantly correlate with MAOD (4.69±0.30 L and 70.85±4.73 mL/kg). We conclude that the anaerobic alactic system is the main energy system in MART efforts and that this test did not significantly correlate with MAOD.

  4. From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2014-12-01

    The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
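The claimed Paretian tails follow directly from log-Laplace statistics: if log-wealth is Laplace with scale b, the upper tail of wealth decays as w^(-1/b). A quick numerical check (location and scale parameters are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b = 0.0, 0.5
# Log-Laplace wealth: exponentiate Laplace-distributed log-wealth.
w = np.exp(rng.laplace(mu, b, size=200_000))
w_sorted = np.sort(w)[::-1]

# Hill estimator of the tail exponent from the largest k observations:
# alpha_hat = 1 / mean( log(w_(i) / w_(k+1)) ), i = 1..k.
k = 2000
hill = 1.0 / np.mean(np.log(w_sorted[:k] / w_sorted[k]))
print(round(hill, 2))                # close to the predicted 1/b = 2
```

For this choice of b, the tail P(W > w) is exactly 0.5·w^(-2) above w = e^mu, so the empirical Hill estimate recovers the Pareto exponent 1/b.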

  5. Overestimate of relative aerobic contribution with maximal accumulated oxygen deficit: a review.

    PubMed

    Li, Y; Niessen, M; Chen, X; Hartmann, U

    2015-05-01

    Maximal accumulated oxygen deficit (MAOD) is widely utilized in calculating energy production during supra-maximal exercise. Since its introduction by Medbo et al. in 1988, debate on MAOD has persisted. The purpose of this review was to summarize the development and description of MAOD and of another method of calculating energy production (Pcr-La-O₂). We reviewed similar studies on relative aerobic contribution (O₂%) and analyzed the various results of O₂% calculated using MAOD or Pcr-La-O₂. An overestimate of O₂% was found when using MAOD compared to Pcr-La-O₂. The overestimate when using MAOD is likely due to the linear extrapolation of oxygen uptake to supra-maximal intensity, the neglect of anaerobic energy release, and the reduced duration of each step in the sub-maximal incremental test. Since it is unknown which method provides the more reliable estimation of O₂%, an exponential regression function (y = 22.404 · e^x + 45.176, where y is O₂% in percent and x is the duration of the supra-maximal exercise in minutes) was drawn from the existing data using both methods.

  6. MATISSE: specifications and expected performances

    NASA Astrophysics Data System (ADS)

    Matter, A.; Lagarde, S.; Petrov, R. G.; Berio, P.; Robbe-Dubois, S.; Lopez, B.; Antonelli, P.; Allouche, F.; Cruzalebes, P.; Millour, F.; Bazin, G.; Bourgès, L.

    2016-08-01

    MATISSE (Multi AperTure mid-Infrared SpectroScopic Experiment) is the next-generation spectro-interferometer at the European Southern Observatory VLTI, operating in the spectral bands L, M, and N and combining four beams from the unit and auxiliary telescopes. MATISSE is now fully integrated at the Observatoire de la Côte d'Azur in Nice (France) and has very recently entered its testing phase in the laboratory. This paper summarizes the equations describing the MATISSE signal and the associated sources of noise. The specifications and the expected performances of the instrument are then evaluated taking into account the current characteristics of the instrument and the VLTI infrastructure, including transmission and contrast degradation budgets. In addition, we present the different MATISSE simulation tools that will be made available to the future users.

  7. Maximizing Educational Opportunity through Community Resources.

    ERIC Educational Resources Information Center

    Maradian, Steve

    In the face of increased demands and diminishing resources, educational administrators at correctional facilities should look beyond institutional resources and utilize the services of area community colleges. The community college has an established track record in correctional education. Besides the nationally recognized correctional programs…

  8. Motor Activity Improves Temporal Expectancy

    PubMed Central

    Fautrelle, Lilian; Mareschal, Denis; French, Robert; Addyman, Caspar; Thomas, Elizabeth

    2015-01-01

    Certain brain areas involved in interval timing are also important in motor activity. This raises the possibility that motor activity might influence interval timing. To test this hypothesis, we assessed interval timing in healthy adults following different types of training. The pre- and post-training tasks consisted of a button press in response to the presentation of a rhythmic visual stimulus. Alterations in temporal expectancy were evaluated by measuring response times. Training consisted of responding to the visual presentation of regularly appearing stimuli by either: (1) pointing with a whole-body movement, (2) pointing only with the arm, (3) imagining pointing with a whole-body movement, (4) simply watching the stimulus presentation, (5) pointing with a whole-body movement in response to a target that appeared at irregular intervals (6) reading a newspaper. Participants performing a motor activity in response to the regular target showed significant improvements in judgment times compared to individuals with no associated motor activity. Individuals who only imagined pointing with a whole-body movement also showed significant improvements. No improvements were observed in the group that trained with a motor response to an irregular stimulus, hence eliminating the explanation that the improved temporal expectations of the other motor training groups was purely due to an improved motor capacity to press the response button. All groups performed a secondary task equally well, hence indicating that our results could not simply be attributed to differences in attention between the groups. Our results show that motor activity, even when it does not play a causal or corrective role, can lead to improved interval timing judgments. PMID:25806813

  9. A sampling plan for conduit-flow karst springs: Minimizing sampling cost and maximizing statistical utility

    USGS Publications Warehouse

    Currens, J.C.

    1999-01-01

Analytical data for nitrate and triazines from 566 samples collected over a 3-year period at Pleasant Grove Spring, Logan County, KY, were statistically analyzed to determine the minimum data set needed to calculate meaningful yearly averages for a conduit-flow karst spring. Results indicate that a biweekly sampling schedule augmented with bihourly samples from high-flow events will provide meaningful suspended-constituent and dissolved-constituent statistics. Unless collected over an extensive period of time, daily samples may not be representative and may also be autocorrelated. All high-flow events resulting in a significant deflection of a constituent from base-line concentrations should be sampled. Either the geometric mean or the flow-weighted average of the suspended constituents should be used. If automatic samplers are used, then they may be programmed to collect storm samples as frequently as every few minutes to provide details on the arrival time of constituents of interest. However, only samples collected bihourly should be used to calculate averages. By adopting a biweekly sampling schedule augmented with high-flow samples, the need to continuously monitor discharge, or to search for and analyze existing data to develop a statistically valid monitoring plan, is lessened.
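    The two averages the abstract recommends are straightforward to compute. A minimal sketch, using hypothetical (discharge, concentration) sample pairs rather than the study's data:

    ```python
    # Sketch of the two averaging methods recommended for suspended
    # constituents. Each sample is (discharge in L/s, concentration in mg/L);
    # the values are invented for illustration.
    import math

    samples = [(120.0, 2.1), (95.0, 1.8), (480.0, 6.5), (150.0, 2.4)]

    # Geometric mean of concentrations (requires all values > 0).
    concs = [c for _, c in samples]
    geo_mean = math.exp(sum(math.log(c) for c in concs) / len(concs))

    # Flow-weighted average: weight each concentration by its discharge.
    total_q = sum(q for q, _ in samples)
    flow_weighted = sum(q * c for q, c in samples) / total_q

    print(round(geo_mean, 3), round(flow_weighted, 3))
    ```

    Note how the high-flow sample pulls the flow-weighted average above the geometric mean, which is why the choice between the two matters for storm-dominated springs.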

  10. Maximizing semi-active vibration isolation utilizing a magnetorheological damper with an inner bypass configuration

    SciTech Connect

    Bai, Xian-Xu; Wereley, Norman M.; Hu, Wei

    2015-05-07

A single-degree-of-freedom (SDOF) semi-active vibration control system based on a magnetorheological (MR) damper with an inner bypass is investigated in this paper. The MR damper, employing a pair of concentric tubes between which the key structure, i.e., the inner bypass, is formed and the MR fluids are energized, is designed to provide a large dynamic range (i.e., ratio of field-on damping force to field-off damping force) and damping force range. The damping force performance of the MR damper is modeled using a phenomenological model and verified by experimental tests. In order to assess its feasibility and capability in vibration control systems, the mathematical model of an SDOF semi-active vibration control system based on the MR damper and a skyhook control strategy is established. Using an MTS 244 hydraulic vibration exciter system and a dSPACE DS1103 real-time simulation system, an experimental study of the SDOF semi-active vibration control system is also conducted. Simulation results are compared to experimental measurements.

  11. An Analysis of Methods for Maximizing the Utilization of Space in USAF Facilities.

    DTIC Science & Technology

    1987-09-01

    order to minimize costs, or by noise dampening requirements such as the need to separate classroom areas from plant rooms and heavy workshop areas. 9... classroom requirements of educational facilities. They involve examining the number of students enrolled in each course and the schedule of classes for...facility design but in timetable formulation to ensure that class scheduling does not exceed the capacity of classroom space available. Queuing Models

  12. Solving for Optimal Retirement Financial Plans by Maximizing a Discounted Habit Formation Utility Function

    DTIC Science & Technology

    2009-03-01

(equation garbled in source) where "#up mkts" is the cumulative number of up markets experienced in time...the effects of habit formation on optimal retirement financial plans. Mkt Perf is the average number of up markets experienced in the sample of 30...the Mean C confidence interval for the sample. Table 2 (Scenario MURP results) lists columns dt, Mkt Perf, Mean C, Std Dev, and 95% C.I. +/-.

  13. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
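    For readers unfamiliar with the baseline being recast, a minimal conventional simulated-annealing loop might look like the sketch below. This is the plain algorithm on a toy quadratic objective, not the paper's COIN variant; the game-theoretic version replaces each variable with a utility-maximizing player, which is not shown here.

    ```python
    # Minimal conventional simulated annealing on a toy 1-D objective.
    # All parameter values (step size, cooling rate, etc.) are illustrative.
    import math
    import random

    def anneal(objective, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
        rng = random.Random(seed)
        x, fx = x0, objective(x0)
        t = t0
        for _ in range(steps):
            candidate = x + rng.gauss(0.0, 0.5)   # local Gaussian move
            fc = objective(candidate)
            # Accept downhill moves always; uphill moves with Boltzmann probability.
            if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
                x, fx = candidate, fc
            t *= cooling                          # geometric cooling schedule
        return x, fx

    best_x, best_f = anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
    print(best_x, best_f)
    ```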

  14. Using materials prognosis to maximize the utilization potential of complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Christodoulou, Leo; Larsen, James M.

    2004-03-01

Performance and life limits for structural materials in complex mechanical systems are often established based heavily on a fear of failure. Conventional approaches for avoiding structural failure often involve extensive periodic inspections, lengthy maintenance processes, and highly conservative “go, no-go” operational decisions, all of which may significantly impair system readiness. This article summarizes a typical present-day life-management process for an advanced system and then presents the key elements of an alternative life-management approach known as materials damage prognosis.

  15. IMPORTANCE OF MITOCHONDRIAL PO2 IN MAXIMAL O2 TRANSPORT AND UTILIZATION: A THEORETICAL ANALYSIS

    PubMed Central

    Cano, I; Mickael, M; Gomez-Cabrero, D.; Tegnér, J; Roca, J; Wagner, PD

    2013-01-01

In previous calculations of how the O2 transport system limits V̇O2max, it was reasonably assumed that mitochondrial PO2 (PmO2) could be neglected (set to zero). However, in reality, PmO2 must exceed zero and the red cell to mitochondrion diffusion gradient may therefore be reduced, impairing diffusive transport of O2 and V̇O2max. Accordingly, we investigated the influence of PmO2 on these calculations by coupling previously used equations for O2 transport to one for mitochondrial respiration relating mitochondrial V̇O2 to PO2. This hyperbolic function, characterized by its P50 and V̇MAX, allowed PmO2 to become a model output (rather than set to zero as previously). Simulations using data from exercising normal subjects showed that at V̇O2max, PmO2 was usually < 1 mm Hg, and that the effects on V̇O2max were minimal. However, when O2 transport capacity exceeded mitochondrial V̇MAX, or if P50 were elevated, PmO2 often reached double-digit values, thereby reducing the diffusion gradient and significantly decreasing V̇O2max. PMID:24012990
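    The coupling described can be illustrated numerically: diffusive supply D·(PcapO2 − PmO2) decreases in PmO2 while the hyperbolic demand V̇MAX·PmO2/(P50 + PmO2) increases, so the operating PmO2 is the unique crossing point. A sketch with entirely hypothetical parameter values (not the study's), solved by bisection:

    ```python
    # Toy version of coupling diffusive O2 supply to hyperbolic mitochondrial
    # demand. Parameters (d_o2, p_cap, vmax, p50) are hypothetical, chosen only
    # to illustrate finding the PmO2 at which supply equals demand.

    def solve_pm(d_o2, p_cap, vmax, p50, tol=1e-9):
        supply = lambda pm: d_o2 * (p_cap - pm)        # Fick-type diffusion
        demand = lambda pm: vmax * pm / (p50 + pm)     # hyperbolic respiration
        lo, hi = 0.0, p_cap
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            # supply - demand is decreasing in pm and positive at pm = 0
            if supply(mid) > demand(mid):
                lo = mid
            else:
                hi = mid
        pm = 0.5 * (lo + hi)
        return pm, demand(pm)

    pm, vo2 = solve_pm(d_o2=2.0, p_cap=40.0, vmax=70.0, p50=0.5)
    print(pm, vo2)
    ```

    With these made-up numbers the demand curve saturates near the supply capacity, so the equilibrium PmO2 is well above zero, which is the qualitative effect the abstract describes.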

  16. Cargo Loading--A Proposed Approach for Maximizing Space Utilization of Containers Loaded with Palletized Loads.

    DTIC Science & Technology

    1982-09-01

HOVER and select pattern J such that, for all I, 0 < HTTIER(N,J) - HOV(N) < HTTIER(N,I) - HOV(N). If J is void, then an underfill cannot be... If x2SMALL is void, go to Step 8. Otherwise, set n2 = n2 + 1 and go to Step 4. Step 4: Test if n1, n2 satisfy X - n1 x < x2SMALL, where x2SMALL denotes... Let x2LOWER and x2UPPER denote bounds (equations garbled in source), and let {x2} denote the set of available sizes which satisfy x2LOWER <= x2 <= x2UPPER. If {x2} is void, go

  17. 76 FR 37376 - Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-27

    ..., the Commission develops and disseminates scientific and other information and reviews information..., cartographic, narrative, or audiovisual. 5. ``Integrity'' refers to security--the protection of information.... Information Quality Standards and Pre-dissemination Review The Marine Mammal Commission remains committed...

  18. Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery

    NASA Technical Reports Server (NTRS)

    Rose, William I.

    1993-01-01

    Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth and particle mass involcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectrometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.

  19. Value out of the Rear of the Gin - Utilizing Cottonseed and Gin Wastes to Maximize Profits

    Technology Transfer Automated Retrieval System (TEKTRAN)

The best way to tell how good a job you are doing in the gin is to look at what is coming out the rear of the gin. The gin takes the harvested cotton modules, conditions them, and separates them into different product streams; the lint into a marketable UD bale, the seed into storage (short or long term), ...

  20. 77 FR 46069 - Proposed Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ... days between the hours of 10 a.m. and 5 p.m. Eastern Time. You can make an appointment to inspect the...: Requests for additional information should be directed to Christopher Willey, Chief Information...

  1. Maximized exoEarth candidate yields for starshades

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.

    2016-10-01

The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive than a coronagraph to photometric noise sources as well as to parameters controlling the photon collection rate. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η ≳ 0.3.

  2. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  3. Expectation-based syntactic comprehension.

    PubMed

    Levy, Roger

    2008-03-01

    This paper investigates the role of resource allocation as a source of processing difficulty in human sentence comprehension. The paper proposes a simple information-theoretic characterization of processing difficulty as the work incurred by resource reallocation during parallel, incremental, probabilistic disambiguation in sentence comprehension, and demonstrates its equivalence to the theory of Hale [Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL (Vol. 2, pp. 159-166)], in which the difficulty of a word is proportional to its surprisal (its negative log-probability) in the context within which it appears. This proposal subsumes and clarifies findings that high-constraint contexts can facilitate lexical processing, and connects these findings to well-known models of parallel constraint-based comprehension. In addition, the theory leads to a number of specific predictions about the role of expectation in syntactic comprehension, including the reversal of locality-based difficulty patterns in syntactically constrained contexts, and conditions under which increased ambiguity facilitates processing. The paper examines a range of established results bearing on these predictions, and shows that they are largely consistent with the surprisal theory.
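    Surprisal itself is simple to compute once a conditional language model is available: the surprisal of a word is −log2 of its probability in context. A minimal sketch with an invented probability table (the values are not from the paper):

    ```python
    # Surprisal of a word in context, per Hale (2001): -log2 P(word | context).
    # The tiny conditional-probability table is fabricated for illustration.
    import math

    p = {("the", "dog"): 0.10, ("the", "runway"): 0.002}

    def surprisal(context, word):
        return -math.log2(p[(context, word)])

    s_dog = surprisal("the", "dog")        # expected word: low surprisal
    s_runway = surprisal("the", "runway")  # unexpected word: high surprisal
    print(s_dog, s_runway)
    ```

    The theory's claim is that processing difficulty tracks this quantity, so the less expected continuation should be the harder one to read.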

  4. Electric utility power plant choice under investment regulation

    SciTech Connect

    Rothwell, G.S.

    1985-01-01

Economists have examined electric utility behavior under an allowed rate of return (ARoR), but little attention has been given to the regulation of generation technology choice. Two financial methods for regulating this investment decision are Allowance for Funds Used During Construction (AFUDC) and Construction Work in Progress (CWIP). The switch from AFUDC to CWIP by federal and state commissions has sparked a national debate. One issue addressed in this study is whether AFUDC or CWIP influenced technology choice. The first chapter reviews policy and compares the present value of returns to the firm under AFUDC with returns under CWIP. Chapter 2 discusses previous studies of electric utility behavior. In the third chapter, a general model is developed in which profit and revenue maximization, and cost minimization, are nested. It incorporates cost uncertainty, as well as AFUDC and CWIP. Chapter 4 estimates cost functions for coal and nuclear capacity with data on 350 units completed between 1965 and 1980. These estimates are used to forecast firm expectations on the mean and variance of power plant costs. Results suggest that while rate-of-return and investment regulation may not have had a great influence on plant choice, policies that change perceived cost uncertainties will have a significant impact on firm behavior.

  5. Maximal and sub-maximal functional lifting performance at different platform heights.

    PubMed

    Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J

    2015-01-01

    Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R(2) = 0.76 ± 0.18, p < 0.05) and MAWL (R(2) = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively.

  6. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Reasonable expectation. 63.304 Section 63.304 Energy... Reasonable expectation. Reasonable expectation means that the Commission is satisfied that compliance will be achieved based upon the full record before it. Characteristics of reasonable expectation include that...

  7. Maximizing exosome colloidal stability following electroporation.

    PubMed

    Hood, Joshua L; Scott, Michael J; Wickline, Samuel A

    2014-03-01

    Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.

  8. Maximal respiratory pressure in healthy Japanese children.

    PubMed

    Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide

    2017-03-01

[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3-12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3-6 (Group I: 68 subjects), 7-9 (Group II: 86 subjects), and 10-12 (Group III: 64 subjects) years. [Results] Respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. Correlation coefficients between maximal respiratory pressure and age, height, and weight were significant for each sex, ranging from 0.279 to 0.471. [Conclusion] In this study, we present pediatric respiratory muscle pressure reference values for each age group. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, suggesting that respiratory muscle pressures vary with ethnicity.

  9. Maximally localized Wannier functions: Theory and applications

    NASA Astrophysics Data System (ADS)

    Marzari, Nicola; Mostofi, Arash A.; Yates, Jonathan R.; Souza, Ivo; Vanderbilt, David

    2012-10-01

    The electronic ground state of a periodic system is usually described in terms of extended Bloch orbitals, but an alternative representation in terms of localized “Wannier functions” was introduced by Gregory Wannier in 1937. The connection between the Bloch and Wannier representations is realized by families of transformations in a continuous space of unitary matrices, carrying a large degree of arbitrariness. Since 1997, methods have been developed that allow one to iteratively transform the extended Bloch orbitals of a first-principles calculation into a unique set of maximally localized Wannier functions, accomplishing the solid-state equivalent of constructing localized molecular orbitals, or “Boys orbitals” as previously known from the chemistry literature. These developments are reviewed here, and a survey of the applications of these methods is presented. This latter includes a description of their use in analyzing the nature of chemical bonding, or as a local probe of phenomena related to electric polarization and orbital magnetization. Wannier interpolation schemes are also reviewed, by which quantities computed on a coarse reciprocal-space mesh can be used to interpolate onto much finer meshes at low cost, and applications in which Wannier functions are used as efficient basis functions are discussed. Finally the construction and use of Wannier functions outside the context of electronic-structure theory is presented, for cases that include phonon excitations, photonic crystals, and cold-atom optical lattices.

  10. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate for monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate for nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
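    The Gaussianization idea that MaxEnt generalizes can be sketched directly: since a sum of many independent variables is near-Gaussian, mapping the distorted observation through its empirical CDF and then the inverse Gaussian CDF approximately undoes an unknown monotonic distortion, up to an affine transform. The sketch below is the plain Gaussianization baseline on synthetic data, not the paper's MaxEnt algorithm:

    ```python
    # Blind inversion of a monotonic distortion via Gaussianization:
    # rank-transform the observation, then apply the inverse Gaussian CDF.
    # All data are synthetic; the distortion f(s) = s^3 + s is invented.
    import random
    from statistics import NormalDist

    rng = random.Random(42)
    n = 5000
    source = [sum(rng.uniform(-1, 1) for _ in range(12)) for _ in range(n)]
    observed = [s ** 3 + s for s in source]   # unknown monotonic distortion

    # Empirical CDF via ranks, then the standard normal quantile function.
    order = sorted(range(n), key=lambda i: observed[i])
    recovered = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order):
        u = (rank + 0.5) / n                  # avoid CDF values of 0 and 1
        recovered[i] = nd.inv_cdf(u)

    # Correlation with the true source should be close to 1, since a
    # monotone distortion preserves ranks.
    mean_s = sum(source) / n
    mean_r = sum(recovered) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(source, recovered)) / n
    var_s = sum((s - mean_s) ** 2 for s in source) / n
    var_r = sum((r - mean_r) ** 2 for r in recovered) / n
    corr = cov / (var_s * var_r) ** 0.5
    print(round(corr, 4))
    ```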

  11. Viral quasispecies assembly via maximal clique enumeration.

    PubMed

    Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko

    2014-03-01

    Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
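    The core combinatorial step, maximal clique enumeration, is classically done with the Bron-Kerbosch algorithm. The sketch below shows generic Bron-Kerbosch with pivoting on an invented toy graph; HaploClique's actual procedure builds the graph from read-pair compatibility statistics, which is not reproduced here.

    ```python
    # Generic maximal clique enumeration via Bron-Kerbosch with pivoting.
    # The toy graph (nodes 0-4) is invented for illustration; in HaploClique
    # nodes would be read pairs and edges statistical compatibility.

    def bron_kerbosch(r, p, x, adj, out):
        if not p and not x:
            out.append(sorted(r))             # r is a maximal clique
            return
        # Choose a pivot with many neighbors in p to prune branches.
        pivot = max(p | x, key=lambda v: len(adj[v] & p))
        for v in list(p - adj[pivot]):
            bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
            p.remove(v)
            x.add(v)

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
    adj = {v: set() for v in range(5)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    cliques = []
    bron_kerbosch(set(), set(adj), set(), adj, cliques)
    print(sorted(cliques))
    ```

    On this graph the maximal cliques are {0,1,2}, {1,2,3}, and {3,4}; note that {0,1,2,3} is not a clique because 0 and 3 are not adjacent.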

  12. Evolution of correlated multiplexity through stability maximization

    NASA Astrophysics Data System (ADS)

    Dwivedi, Sanjiv K.; Jalan, Sarika

    2017-02-01

    Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationship and symmetric couplings in the other depicting mutualistic (or competitive) relationship, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of the correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding to these findings by considering starlike networks representing both the layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
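    The stability criterion above is the largest eigenvalue of the adjacency matrix, which can be estimated by power iteration. A minimal sketch on a toy symmetric graph (a triangle with one pendant node, whose spectral radius is the largest root of λ³ − λ² − 3λ + 1 ≈ 2.170); this illustrates only the eigenvalue computation, not the paper's evolutionary procedure:

    ```python
    # Power iteration for the spectral radius of a symmetric adjacency matrix.
    # Toy graph only: triangle 0-1-2 with pendant node 3 attached to node 0.

    def largest_eigenvalue(a, iters=500):
        n = len(a)
        v = [1.0] * n
        lam = 0.0
        for _ in range(iters):
            w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = max(abs(x) for x in w) or 1.0   # infinity norm
            v = [x / norm for x in w]
            lam = norm                             # converges to the spectral radius
        return lam

    a = [[0, 1, 1, 1],
         [1, 0, 1, 0],
         [1, 1, 0, 0],
         [1, 0, 0, 0]]
    print(round(largest_eigenvalue(a), 4))
    ```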

  13. Maximizing Exosome Colloidal Stability Following Electroporation

    PubMed Central

    Hood, Joshua L.; Scott, Michael J.; Wickline, Samuel A.

    2014-01-01

    Development of exosome based semi-synthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5 nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label free means to enrich exogenously modified exosomes and introduces the potential for MRI driven theranostic exosome investigations in vivo. PMID:24333249

  14. Predicting maximal grip strength using hand circumference.

    PubMed

    Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves

    2010-12-01

    The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r(2) = 0.624 for Jamar; r(2) = 0.683 for Myogrip and r(2) = 0.473 for Vigorimeter; all adjusted r(2)). Moreover a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone.
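    The single-predictor model has the form MGS = a + b · hand circumference, fitted by ordinary least squares. A minimal sketch on fabricated (circumference, MGS) pairs; these numbers and the resulting coefficients are NOT the study's data or equation:

    ```python
    # Ordinary least-squares fit of maximal grip strength (kg) on hand
    # circumference (cm). The data points are fabricated for illustration.

    def fit_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        return slope, my - slope * mx          # (slope, intercept)

    circ = [18.0, 19.5, 20.0, 21.5, 22.0, 23.5]
    mgs = [24.0, 30.0, 33.0, 40.0, 43.0, 50.0]
    b, a = fit_line(circ, mgs)
    predicted = a + b * 21.0                   # predicted MGS at 21 cm
    print(round(b, 3), round(a, 3), round(predicted, 1))
    ```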

  15. Maximal respiratory pressure in healthy Japanese children

    PubMed Central

    Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide

    2017-01-01

[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3–12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3–6 (Group I: 68 subjects), 7–9 (Group II: 86 subjects), and 10–12 (Group III: 64 subjects) years. [Results] Respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. Correlation coefficients between maximal respiratory pressure and age, height, and weight were significant for each sex, ranging from 0.279 to 0.471. [Conclusion] In this study, we present pediatric respiratory muscle pressure reference values for each age group. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, suggesting that respiratory muscle pressures vary with ethnicity. PMID:28356644

  16. Reflection quasilattices and the maximal quasilattice

    NASA Astrophysics Data System (ADS)

    Boyle, Latham; Steinhardt, Paul J.

    2016-08-01

    We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d >2 , and for all those with quadratic irrational scale factors in d =2 .

  17. Expectation-Based Control of Noise and Chaos

    NASA Technical Reports Server (NTRS)

    Zak, Michael

    2006-01-01

    A proposed approach to control of noise and chaos in dynamic systems would supplement conventional methods. The approach is based on fictitious forces composed of expectations governed by Fokker-Planck or Liouville equations that describe the evolution of the probability densities of the controlled parameters. These forces would be utilized as feedback control forces that would suppress the undesired diffusion of the controlled parameters. Examples of dynamic systems in which the approach is expected to prove beneficial include spacecraft, electronic systems, and coupled lasers.
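
    As a toy illustration of the idea (not Zak's actual Fokker-Planck formulation), the sketch below adds a hypothetical feedback force that pulls each realization of a noisy system toward the ensemble expectation; the variance of the controlled ensemble stays far below that of the free diffusion. All parameter names and values here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def simulate(n_particles=500, n_steps=200, dt=0.01, sigma=1.0, gain=0.0):
    """Euler-Maruyama for dx = -gain * (x - <x>) dt + sigma dW.

    The drift is a toy 'expectation-based' feedback force: it pulls each
    realization toward the ensemble mean, suppressing diffusion. With
    gain = 0 the system is a pure random walk."""
    xs = [0.0] * n_particles
    sq = math.sqrt(dt)
    for _ in range(n_steps):
        mean = sum(xs) / n_particles  # the 'expectation' used as feedback
        xs = [x - gain * (x - mean) * dt + sigma * sq * random.gauss(0, 1)
              for x in xs]
    return xs

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_free = variance(simulate(gain=0.0))   # diffusion spreads: var ~ sigma^2 * T
var_ctrl = variance(simulate(gain=10.0))  # feedback holds the ensemble together

print(var_ctrl < var_free)
```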

  18. Mammogram segmentation using maximal cell strength updation in cellular automata.

    PubMed

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammogram is one of the most effective tools for early detection of the breast cancer. Various computer-aided systems have been introduced to detect the breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs an adaptive global thresholding based on the histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated over the dataset of 70 mammograms with mass from mini-MIAS database. Experimental results show that the proposed approach yields promising results to segment the mass region in the mammograms with the sensitivity of 92.25% and accuracy of 93.48%.
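
    The cellular-automaton step can be sketched as follows. This is a simplified, hypothetical transition rule inspired by the description above (a cell adopts the maximal strength offered by any neighbor, attenuated by intensity similarity), not the authors' exact "maximal cell strength updation" rule; the toy image, seed, and threshold are illustrative assumptions.

```python
# Toy "mammogram": a bright mass on a dark background (intensities in [0, 1]).
img = [
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
    [0.1, 0.8, 0.9, 0.8, 0.1, 0.1],
    [0.1, 0.9, 1.0, 0.9, 0.1, 0.1],
    [0.1, 0.8, 0.9, 0.8, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
]
H, W = len(img), len(img[0])
seed = (2, 2)  # in the paper this comes from the GLCM sum-average step

# Cell strengths: the seed starts at 1, all other cells at 0.
strength = [[0.0] * W for _ in range(H)]
strength[seed[0]][seed[1]] = 1.0

def attenuation(a, b):
    """High when neighboring intensities are similar, low across edges."""
    return 1.0 / (1.0 + 5.0 * abs(a - b))

# Synchronous CA updates: each cell keeps the maximal strength offered by
# any 4-neighbor, attenuated by intensity similarity.
for _ in range(10):
    new = [row[:] for row in strength]
    for i in range(H):
        for j in range(W):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    offer = strength[ni][nj] * attenuation(img[i][j], img[ni][nj])
                    new[i][j] = max(new[i][j], offer)
    strength = new

# Cells whose strength survives the propagation form the segmented mass.
mask = [[1 if s > 0.3 else 0 for s in row] for row in strength]
print(sum(map(sum, mask)))
```

    Strength decays sharply across the intensity edge at the mass boundary, so the thresholded mask recovers the bright region without leaking into the background.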

  19. Practical applicability of Nyayas - (Maxims) mentioned in Chakrapani Tika.

    PubMed

    Vyas, Mahesh Kumar; Dwivedi, Rambabu

    2014-01-01

    The Nyayas - (Maxims) are of two types: (1) Loukika Nyaya and (2) Shastriya Nyaya. Loukika Nyayas are those used by the common public in day-to-day life, whereas Shastriya Nyayas are those used by the authors of treatises to expound their concepts. Most commonly, the Shastriya Nyayas were put forth by the Granthakaras using the meaning and gist of Loukika Nyayas. Moreover, the notion of a Nyaya depends mainly upon the situation, place, and topic of explanation. These Nyayas have helped convey the meaning of topics since the Vaidika Kala, and they teach hidden meanings correctly. As with the Vedas, these Nyayas are part of other Shastras, and of Ayurveda Shastra too. The Acharyas of Ayurveda utilized these Nyayas while explaining Nidana, Chikitsa, and other topics. Discerning these Nyayas in their entirety in one place, with examples, is necessary for easy understanding of the Shastra. Here an attempt is made to explore such Nyayas mentioned in Ayurveda for the benefit of easy understanding of the subject.

  20. Practical applicability of Nyayas – (Maxims) mentioned in Chakrapani Tika

    PubMed Central

    Vyas, Mahesh Kumar; Dwivedi, Rambabu

    2014-01-01

    The Nyayas – (Maxims) are of two types: (1) Loukika Nyaya and (2) Shastriya Nyaya. Loukika Nyayas are those used by the common public in day-to-day life, whereas Shastriya Nyayas are those used by the authors of treatises to expound their concepts. Most commonly, the Shastriya Nyayas were put forth by the Granthakaras using the meaning and gist of Loukika Nyayas. Moreover, the notion of a Nyaya depends mainly upon the situation, place, and topic of explanation. These Nyayas have helped convey the meaning of topics since the Vaidika Kala, and they teach hidden meanings correctly. As with the Vedas, these Nyayas are part of other Shastras, and of Ayurveda Shastra too. The Acharyas of Ayurveda utilized these Nyayas while explaining Nidana, Chikitsa, and other topics. Discerning these Nyayas in their entirety in one place, with examples, is necessary for easy understanding of the Shastra. Here an attempt is made to explore such Nyayas mentioned in Ayurveda for the benefit of easy understanding of the subject. PMID:26664230

  1. Maximizing the liquid fuel yield in a biorefining process.

    PubMed

    Zhang, Bo; von Keitz, Marc; Valentas, Kenneth

    2008-12-01

    Biorefining strives to recover the maximum value from each fraction, at minimum energy cost. In order to seek an unbiased and thorough assessment of the alleged opportunity offered by biomass fuels, the direct conversion of various lignocellulosic biomass feedstocks was studied: aspen pulp wood (Populus tremuloides), aspen wood pretreated with dilute acid, aspen lignin, aspen logging residues, corn stalk, corn spathe, corn cob, corn stover, corn stover pellet, corn stover pretreated with dilute acid, and lignin extracted from corn stover. Besides the heating rate, the yield of liquid products was found to depend on the final liquefaction temperature and the length of liquefaction time. The major compounds of the liquid products from the various feedstocks were identified by GC-MS. Lignin was found to be a good candidate for the liquefaction process, and biomass fractionation was necessary to maximize the yield of the liquid bio-fuel. The results suggest a biorefinery process combining pretreatment, fermentation to ethanol, liquefaction to bio-crude oil, and other thermo-conversion technologies, such as gasification. Other biorefinery options, including supercritical water gasification and the effectual utilization of the bio-crude oil, are also addressed.

  2. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects.
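
    The construction of the test statistic can be sketched directly from the description above: discard observations below increasing data-driven thresholds, form McNemar's statistic on the signs of the retained observations at each threshold, and keep the largest value. The helper below is an illustrative reading of that procedure; the exact null distribution and p-values from the paper are not implemented.

```python
def max_mcnemar(diffs):
    """Maximally selected McNemar statistic for symmetry about zero.

    Thresholds are the observed nonzero |differences| (data-driven, as in
    the paper). At each threshold t, observations with |d| >= t are kept,
    positives and negatives are counted, and McNemar's statistic
    (n_pos - n_neg)**2 / (n_pos + n_neg) is formed; the test statistic is
    the maximum over thresholds.
    """
    thresholds = sorted({abs(d) for d in diffs if d != 0})
    best = 0.0
    for t in thresholds:
        kept = [d for d in diffs if abs(d) >= t]
        n_pos = sum(1 for d in kept if d > 0)
        n_neg = sum(1 for d in kept if d < 0)
        if n_pos + n_neg == 0:
            continue
        best = max(best, (n_pos - n_neg) ** 2 / (n_pos + n_neg))
    return best

# A symmetric sample yields 0; a sample with a heavy positive tail yields a
# large statistic driven by the large pre-post differences.
symmetric = [-3, -2, -1, 1, 2, 3]
asymmetric = [-0.5, -0.4, 0.3, 4.0, 5.0, 6.0]
print(max_mcnemar(symmetric), max_mcnemar(asymmetric))
```

    The small discarded observations never enter the winning threshold, which is how the statistic retains power against asymmetry concentrated in the tail.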

  3. Expected geoneutrino signal at JUNO

    NASA Astrophysics Data System (ADS)

    Strati, Virginia; Baldoncini, Marica; Callegari, Ivan; Mantovani, Fabio; McDonough, William F.; Ricci, Barbara; Xhixha, Gerti

    2015-12-01

    Constraints on the Earth's composition and on its radiogenic energy budget come from the detection of geoneutrinos. The Kamioka Liquid scintillator Antineutrino Detector (KamLAND) and Borexino experiments recently reported the geoneutrino flux, which reflects the amount and distribution of U and Th inside the Earth. The Jiangmen Underground Neutrino Observatory (JUNO) neutrino experiment, designed as a 20 kton liquid scintillator detector, will be built in an underground laboratory in South China about 53 km from the Yangjiang and Taishan nuclear power plants, each one having a planned thermal power of approximately 18 GW. Given the large detector mass and the intense reactor antineutrino flux, JUNO aims not only to collect high-statistics antineutrino signals from reactors but also to address the challenge of discriminating the geoneutrino signal from the reactor background. The predicted geoneutrino signal at JUNO, expressed in terrestrial neutrino units (TNU), is based on the existing reference Earth model, with the dominant source of uncertainty coming from the modeling of the compositional variability in the local upper crust that surrounds (out to approximately 500 km) the detector. A special focus is dedicated to the 6° × 4° local crust surrounding the detector, which is estimated to contribute 44% of the signal. On the basis of a worldwide reference model for reactor antineutrinos, the ratio between reactor antineutrino and geoneutrino signals in the geoneutrino energy window is estimated to be 0.7 considering reactors operating in year 2013, and reaches a value of 8.9 by adding the contribution of the future nuclear power plants. In order to extract useful information about the mantle's composition, a refinement of the abundance and distribution of U and Th in the local crust is required, with particular attention to the geochemical characterization of the accessible upper crust, where 47% of the expected geoneutrino signal originates.

  4. Rare flavor processes in Maximally Natural Supersymmetry

    NASA Astrophysics Data System (ADS)

    García, Isabel García; March-Russell, John

    2015-01-01

    We study CP-conserving rare flavor violating processes in the recently proposed theory of Maximally Natural Supersymmetry (MNSUSY). MNSUSY is an unusual supersymmetric (SUSY) extension of the Standard Model (SM) which, remarkably, is untuned at present LHC limits. It employs Scherk-Schwarz breaking of SUSY by boundary conditions upon compactifying an underlying 5-dimensional (5D) theory down to 4D, and is not well described by softly broken SUSY, with phenomenology much different from that of the Minimal Supersymmetric Standard Model (MSSM) and its variants. The usual CP-conserving SUSY-flavor problem is automatically solved in MNSUSY due to a residual almost exact U(1)_R symmetry, naturally heavy and highly degenerate 1st- and 2nd-generation sfermions, and heavy gauginos and Higgsinos. Depending on the exact implementation of MNSUSY there exist important new sources of flavor violation involving gauge boson Kaluza-Klein (KK) excitations. The spatial localization properties of the matter multiplets, in particular the brane localization of the 3rd-generation states, imply that KK-parity is broken and tree-level contributions to flavor-changing neutral currents are present in general. Nevertheless, we show that simple variants of the basic MNSUSY model are safe from present flavor constraints arising from kaon and B-meson oscillations, the rare decays B_{s,d} → μ+μ- and μ → ēee, and μ-e conversion in nuclei. We also briefly discuss some special features of the radiative decay μ → eγ. Future experiments, especially those concerned with lepton flavor violation, should see deviations from SM predictions unless one of the MNSUSY variants with enhanced flavor symmetries is realized.

  5. Maximal Oxygen Uptake, Sweating and Tolerance to Exercise in the Heat

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Castle, B. L.; Ruff, W. K.

    1972-01-01

    The physiological mechanisms that facilitate acute acclimation to heat have not been fully elucidated, but the result is the establishment of a more efficient cardiovascular system to increase heat dissipation via increased sweating that allows the acclimated man to function with a cooler internal environment and to extend his performance. Men in good physical condition with high maximal oxygen uptakes generally acclimate to heat more rapidly and retain it longer than men in poorer condition. Also, upon first exposure trained men tolerate exercise in the heat better than untrained men. Both resting in heat and physical training in a cool environment confer only partial acclimation when first exposed to work in the heat. These observations suggest separate additive stimuli of metabolic heat from exercise and environmental heat to increase sweating during the acclimation process. However, the necessity of utilizing physical exercise during acclimation has been questioned. Bradbury et al. (1964) have concluded exercise has no effect on the course of heat acclimation since increased sweating can be induced by merely heating resting subjects. Preliminary evidence suggests there is a direct relationship between the maximal oxygen uptake and the capacity to maintain thermal regulation, particularly through the control of sweating. Since increased sweating is an important mechanism for the development of heat acclimation, and fit men have high sweat rates, it follows that upon initial exposure to exercise in the heat, men with high maximal oxygen uptakes should exhibit less strain than men with lower maximal oxygen uptakes. The purpose of this study was: (1) to determine if men with higher maximal oxygen uptakes exhibit greater tolerance than men with lower oxygen uptakes during early exposure to exercise in the heat, and (2) to investigate further the mechanism of the relationship between sweating and maximal work capacity.

  6. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern set; that is, on how patterns are distributed in the feature space. As one reason, we have pointed out that ICA features are obtained by increasing only their independence, even when class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability as compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. From the results, we show that better recognition accuracy is obtained using the proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.
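
    The class-separation objective can be made concrete. The sketch below (illustrative, not the authors' SICA algorithm) computes the squared Mahalanobis distance between two classes along a single extracted feature, the quantity SICA maximizes alongside independence: a feature whose class means differ scores far higher than one whose classes overlap.

```python
import random
import statistics

random.seed(1)

def mahalanobis_1d(class1, class2):
    """Squared Mahalanobis distance between two 1-D class samples using the
    pooled within-class variance: (m1 - m2)**2 / pooled_var."""
    m1, m2 = statistics.fmean(class1), statistics.fmean(class2)
    pooled = 0.5 * (statistics.variance(class1) + statistics.variance(class2))
    return (m1 - m2) ** 2 / pooled

# Feature 1 separates the classes (means 0 vs 3); feature 2 does not.
a_f1 = [random.gauss(0.0, 1.0) for _ in range(200)]
b_f1 = [random.gauss(3.0, 1.0) for _ in range(200)]
a_f2 = [random.gauss(0.0, 1.0) for _ in range(200)]
b_f2 = [random.gauss(0.0, 1.0) for _ in range(200)]

d_informative = mahalanobis_1d(a_f1, b_f1)    # large: roughly 3**2 / 1
d_uninformative = mahalanobis_1d(a_f2, b_f2)  # near zero

print(d_informative > d_uninformative)
```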

  7. Maximal stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Agarwal, Sahil; Wettlaufer, John

    2015-11-01

    We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Benard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering (2015), but their variation with noise amplitude exhibits surprising behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity is lost; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations. Finally, we note that these solutions demonstrate that the effect of noise is equivalent to the effect of chaos.
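
    A minimal stochastic Lorenz integration, assuming additive noise and an Euler-Maruyama scheme (an illustration of the system studied, not the background-method bound computation), can be sketched as follows; the time average of z serves as a transport proxy in this truncation. Parameter values are the classic chaotic ones plus an assumed noise amplitude.

```python
import math
import random

random.seed(42)

# Classic chaotic Lorenz parameters plus additive noise of amplitude eps.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
eps = 0.5          # assumed noise amplitude (illustrative)
dt = 0.001
n_steps = 100_000  # integrate to time T = 100

x, y, z = 1.0, 1.0, 1.0
z_sum = 0.0
sq = math.sqrt(dt)
for _ in range(n_steps):
    # Euler-Maruyama step for the stochastic Lorenz equations.
    dx = sigma * (y - x) * dt + eps * sq * random.gauss(0, 1)
    dy = (x * (rho - z) - y) * dt + eps * sq * random.gauss(0, 1)
    dz = (x * y - beta * z) * dt + eps * sq * random.gauss(0, 1)
    x, y, z = x + dx, y + dy, z + dz
    z_sum += z

z_mean = z_sum / n_steps  # time-averaged z: a transport proxy for this truncation
print(z_mean)
```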

  8. Maximizing industrial infrastructure efficiency in Iceland

    NASA Astrophysics Data System (ADS)

    Ingason, Helgi Thor; Sigfusson, Thorsteinn I.

    2010-08-01

    As a consequence of the increasing aluminum production in Iceland, local processing of aluminum skimmings has become a feasible business opportunity. A recycling plant for this purpose was built in Helguvik on the Reykjanes peninsula in 2003. The case of the recycling plant reflects increased concern regarding environmental aspects of the industry. An interesting characteristic of this plant is the fact that it is run in the same facilities as a large fishmeal production installation. It is operated by the same personnel and uses—partly—the same equipment and infrastructure. This paper reviews the grounds for these decisions and the experience of this merger of a traditional fishmeal industry and a more recent aluminum melting industry after 6 years of operation. The paper is written by the original entrepreneurs behind the company, who provide observations on how the aluminum industry in Iceland has evolved since the start of Alur's operation and what might be expected in the near future.

  9. Patients'/Clients' Expectation Toward and Satisfaction from Pharmacy Services

    PubMed Central

    Ayalew, Mohammed Biset; Taye, Kaleab; Asfaw, Daniel; Lemma, Bethlehem; Dadi, Filagot; Solomon, Habtamu; Tazeze, Haile; Tsega, Bayew

    2017-01-01

    Objective: Satisfaction is becoming a popular health-care quality indicator as it reflects the reality of the service or care provided. The aim of this study was to assess the level of patients' expectation toward and satisfaction from the pharmacy services provided and to identify associated factors that might affect their expectation and satisfaction. Methods: A cross-sectional study was conducted on 287 patients who were served in five pharmacies of Gondar University Hospital in May 2015. Data regarding sociodemographic characteristics and parameters that measure patients' expectation and satisfaction were collected through interviews using the Amharic version of the questionnaire. Data were entered into SPSS version 21, and descriptive statistics, cross-tabs, and binary logistic regressions were utilized. P < 0.05 was used to declare association. Findings: Among the 287 respondents involved in the study, 149 (51.9%) claimed to be satisfied with the pharmacy service and setting. Two hundred and twenty-nine (79.4%) respondents had high expectations toward gaining good services. Even though a significant association was observed between pharmacy type and patients' level of satisfaction, sociodemographic characteristics were not found to predict the level of satisfaction. There was a higher level of expectation among study participants who earn a higher income per month (>2000 Ethiopian birr [ETB]) than among those who earn less (<1000 ETB). Conclusion: Although patients have a high level of expectation toward pharmacy services, their satisfaction with the service was found to be low. PMID:28331862

  10. Maximality-Based Structural Operational Semantics for Petri Nets

    NASA Astrophysics Data System (ADS)

    Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda

    2009-03-01

    The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which permits to express true-concurrency in a natural way without splitting actions on their start and end events. One can do this by giving a maximality-based structural operational semantics for the model of Place/Transition Petri nets in terms of maximality-based labeled transition systems structures.

  11. Multicultural Differences in Women's Expectations of Birth.

    PubMed

    Moore, Marianne F

    2016-01-01

    This review surveyed qualitative and quantitative studies to explore the expectations around birth held by women from different cultures. The studies are grouped according to expectations of personal control; expectations of support from partner, others, and family; expectations of care and behavior from providers such as nurses, doctors, and/or midwives; expectations about the health of the baby; and expectations about pain in childbirth. The findings are discussed, and the roles that Western medical culture, power, and privilege play in providing care to these women are noted.

  12. Ischemic preconditioning of the muscle improves maximal exercise performance but not maximal oxygen uptake in humans.

    PubMed

    Crisafulli, Antonio; Tangianu, Flavio; Tocco, Filippo; Concu, Alberto; Mameli, Ombretta; Mulliri, Gabriele; Caria, Marcello A

    2011-08-01

    Brief episodes of nonlethal ischemia, commonly known as "ischemic preconditioning" (IP), are protective against cell injury induced by infarction. Moreover, muscle IP has been found capable of improving exercise performance. The aim of the study was to compare standard exercise performances carried out in normal conditions with those carried out following IP, achieved by brief muscle ischemia at rest (RIP) and after exercise (EIP). Seventeen physically active, healthy male subjects performed three incremental, randomly assigned maximal exercise tests on a cycle ergometer up to exhaustion. One was the reference (REF) test, whereas the others were performed after the RIP and EIP sessions. Total exercise time (TET), total work (TW), maximal power output (W(max)), oxygen uptake (VO(2max)), and pulmonary ventilation (VE(max)) were assessed. Furthermore, impedance cardiography was used to measure maximal heart rate (HR(max)), stroke volume (SV(max)), and cardiac output (CO(max)). A subgroup of volunteers (n = 10) performed all-out tests to assess their anaerobic capacity. We found that both the RIP and EIP protocols increased TET, TW, W(max), VE(max), and HR(max) in a similar fashion with respect to the REF test. In particular, W(max) increased by ∼4% in both preconditioning procedures. However, the preconditioning sessions failed to increase traditionally measured variables such as VO(2max), SV(max), CO(max), and anaerobic capacity. It was concluded that muscle IP improves performance without any difference between the RIP and EIP procedures. The mechanism of this effect could be related to changes in fatigue perception.

  13. Maximizing energy transfer in vibrofluidized granular systems.

    PubMed

    Windows-Yule, C R K; Rosato, A D; Parker, D J; Thornton, A R

    2015-05-01

    Using discrete particle simulations validated by experimental data acquired using the positron emission particle tracking technique, we study the efficiency of energy transfer from a vibrating wall to a system of discrete, macroscopic particles. We demonstrate that even for a fixed input energy from the wall, energy conveyed to the granular system under excitation may vary significantly dependent on the frequency and amplitude of the driving oscillations. We investigate the manner in which the efficiency with which energy is transferred to the system depends on the system variables and determine the key control parameters governing the optimization of this energy transfer. A mechanism capable of explaining our results is proposed, and the implications of our findings in the research field of granular dynamics as well as their possible utilization in industrial applications are discussed.

  14. Criticality Maximizes Complexity in Neural Tissue.

    PubMed

    Timme, Nicholas M; Marshall, Najja J; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M

    2016-01-01

    The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity-an information theoretic measure-has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online.
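
    The power-law fitting step can be illustrated with the simplest continuous (singly truncated) maximum-likelihood estimator, alpha_hat = 1 + n / Σ ln(x/xmin); the toolbox cited above uses more elaborate doubly truncated discrete fits, so this is a hedged sketch of the idea only.

```python
import math
import random

random.seed(7)

def sample_power_law(alpha, xmin, n):
    """Inverse-transform sampling from p(x) proportional to x**-alpha, x >= xmin."""
    return [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_exponent(xs, xmin):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Recover a known exponent from synthetic avalanche-size-like data.
xs = sample_power_law(alpha=2.5, xmin=1.0, n=50_000)
alpha_hat = mle_exponent(xs, xmin=1.0)
print(alpha_hat)
```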

  15. Criticality Maximizes Complexity in Neural Tissue

    PubMed Central

    Timme, Nicholas M.; Marshall, Najja J.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.

    2016-01-01

    The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity—an information theoretic measure—has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online. PMID:27729870

  16. Nursing students' expectations of the college experience.

    PubMed

    Zysberg, Leehu; Zisberg, Anna

    2008-09-01

    Nursing students' expectations of college have not received much attention in the empirical literature. These expectations may be important in better understanding nurses' motivations, role acquisition, and academic and professional success. The first study discussed in this article examined the reliability and construct validity of an instrument designed to assess students' (N = 95) expectations of their college experience. The results indicate good reliability and validity. The second study examined differences in expectations, comparing nursing and non-nursing students (N = 160) in an urban college setting. The results suggest that expectations emphasize practical and professional aspects (i.e., acquiring a profession, earning more money), followed by self-betterment and social life expectations. Nursing students differed from non-nursing students by reporting higher self-betterment and professional expectations but lower academic expectations. Implications for application and further research are discussed.

  17. General conditions for maximal violation of non-contextuality in discrete and continuous variables

    NASA Astrophysics Data System (ADS)

    Laversanne-Finot, A.; Ketterer, A.; Barros, M. R.; Walborn, S. P.; Coudreau, T.; Keller, A.; Milman, P.

    2017-04-01

    The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or using periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions.
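
    The Peres-Mermin scenario mentioned above can be verified directly. The sketch below builds the standard two-qubit Peres-Mermin square and checks that every row of observables multiplies to +identity while the columns multiply to +1, +1, -1, so no noncontextual assignment of ±1 values can reproduce all six product constraints.

```python
# Pauli matrices as 2x2 nested lists of complex numbers.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kron(a, b):
    """Kronecker product: entry ((i*p+k), (j*q+l)) = a[i][j] * b[k][l]."""
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

# The Peres-Mermin square: nine two-qubit observables with spectrum {-1, +1}.
square = [
    [kron(Z, I2), kron(I2, Z), kron(Z, Z)],
    [kron(I2, X), kron(X, I2), kron(X, X)],
    [kron(Z, X), kron(X, Z), kron(Y, Y)],
]

def prod3(ms):
    return matmul(matmul(ms[0], ms[1]), ms[2])

def equals(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol
               for i in range(len(a)) for j in range(len(a)))

I4 = kron(I2, I2)
neg_I4 = [[-v for v in row] for row in I4]

# Rows multiply to +I; columns to +I, +I, -I. A noncontextual +/-1 assignment
# would make the product of all row products (+1) equal the product of all
# column products (-1): a contradiction, hence contextuality.
rows_to_identity = all(equals(prod3(row), I4) for row in square)
cols = [[square[r][c] for r in range(3)] for c in range(3)]
cols_sign_pattern = (equals(prod3(cols[0]), I4)
                     and equals(prod3(cols[1]), I4)
                     and equals(prod3(cols[2]), neg_I4))

print(rows_to_identity and cols_sign_pattern)
```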

  18. Interpersonal Expectancy Effects: A Forty Year Perspective.

    ERIC Educational Resources Information Center

    Rosenthal, Robert

    Interpersonal expectancy effects--the unintentional expectations that experimenters, teachers, and authority figures bring to experiments, classrooms, and other situations--can wield significant influence on individuals. Some of the issues surrounding expectancy effects are detailed in this paper. The effect itself has been recreated in…

  19. Brain mechanisms supporting violated expectations of pain.

    PubMed

    Zeidan, Fadel; Lobanov, Oleg V; Kraft, Robert A; Coghill, Robert C

    2015-09-01

    The subjective experience of pain is influenced by interactions between experiences, future predictions, and incoming afferent information. Expectations of high pain can exacerbate pain, whereas expectations of low pain during a consistently noxious stimulus can produce significant reductions in pain. However, the brain mechanisms associated with processing mismatches between expected and experienced pain are poorly understood, but are important for imparting salience to a sensory event to override erroneous top-down expectancy-mediated information. This investigation examined pain-related brain activation when expectations of pain were abruptly violated. After conditioning participants to cues predicting low or high pain, 10 incorrectly cued stimuli were administered across 56 stimulus trials to determine whether expectations would be less influential on pain when there is a high discordance between prestimulus cues and corresponding thermal stimulation. Incorrectly cued stimuli produced pain ratings and pain-related brain activation consistent with placebo analgesia, nocebo hyperalgesia, and violated expectations. Violated expectations of pain were associated with activation in distinct regions of the inferior parietal lobe, including the supramarginal and angular gyrus, and intraparietal sulcus, the superior parietal lobe, cerebellum, and occipital lobe. Thus, violated expectations of pain engage mechanisms supporting salience-driven sensory discrimination, working memory, and associative learning processes. By overriding the influence of expectations on pain, these brain mechanisms are likely engaged in clinical situations in which patients' unrealistic expectations of pain relief diminish the efficacy of pain treatments. Accordingly, these findings underscore the importance of maintaining realistic expectations to augment the effectiveness of pain management.

  20. Are Grade Expectations Rational? A Classroom Experiment

    ERIC Educational Resources Information Center

    Hossain, Belayet; Tsigaris, Panagiotis

    2015-01-01

    This study examines students' expectations about their final grade. An attempt is made to determine whether students form expectations rationally. Expectations in economics, rational or otherwise, carry valuable information and have important implications in terms of both teaching effectiveness and the role of grades as an incentive structure for…

  1. Community Expectations of College Completion and Attendance

    ERIC Educational Resources Information Center

    Derden, Michael Wade

    2011-01-01

    Communities relay expectations of behavior that influence residents' decision making processes. The study's purpose was to define and identify social, cultural, and human capital variables relevant to understanding community expectations of postsecondary attainment. The study sought an operational model of community expectancy that would allow…

  2. Maximal stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Agarwal, Sahil; Wettlaufer, J. S.

    2016-01-01

    We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
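The stochastic dynamics described above can be explored numerically. The sketch below is illustrative only (it is not the authors' code): it integrates the Lorenz equations with additive white noise by Euler-Maruyama, using the classical parameter values σ = 10, ρ = 28, β = 8/3 as an assumption, and takes the time average of Z as a crude stand-in for the transport statistic.

```python
import numpy as np

def stochastic_lorenz(sigma=10.0, rho=28.0, beta=8.0/3.0,
                      noise_amp=0.5, dt=1e-3, n_steps=20000, seed=0):
    """Euler-Maruyama integration of the Lorenz equations with additive
    white noise of amplitude `noise_amp` on each component."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, 3))
    x[0] = (1.0, 1.0, 1.0)
    sqrt_dt = np.sqrt(dt)
    for k in range(n_steps):
        X, Y, Z = x[k]
        drift = np.array([sigma * (Y - X),
                          X * (rho - Z) - Y,
                          X * Y - beta * Z])
        x[k + 1] = x[k] + drift * dt + noise_amp * sqrt_dt * rng.standard_normal(3)
    return x

# Time average of Z over one realization; repeating over many seeds
# (an ensemble) shows how such statistics vary with noise amplitude,
# which is the dependence the abstract discusses.
traj = stochastic_lorenz()
mean_Z = traj[:, 2].mean()
```

Averaging `mean_Z` over many `seed` values mimics the ensemble averages whose slow convergence the abstract notes.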

  3. Maximizing exposure therapy: an inhibitory learning approach.

    PubMed

    Craske, Michelle G; Treanor, Michael; Conway, Christopher C; Zbozinek, Tomislav; Vervliet, Bram

    2014-07-01

    Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a 'fear habituation' approach and 'belief disconfirmation' approach within standard cognitive-behavior therapy. Exposure optimization strategies include (1) expectancy violation, (2) deepened extinction, (3) occasional reinforced extinction, (4) removal of safety signals, (5) variability, (6) retrieval cues, (7) multiple contexts, and (8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder.

  4. Maximizing Exposure Therapy: An Inhibitory Learning Approach

    PubMed Central

    Craske, Michelle G.; Treanor, Michael; Conway, Chris; Zbozinek, Tomislav; Vervliet, Bram

    2014-01-01

    Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a ‘fear habituation’ approach and ‘belief disconfirmation’ approach within standard cognitive-behavior therapy. Exposure optimization strategies include 1) expectancy violation, 2) deepened extinction, 3) occasional reinforced extinction, 4) removal of safety signals, 5) variability, 6) retrieval cues, 7) multiple contexts, and 8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. PMID:24864005

  5. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    ERIC Educational Resources Information Center

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  6. The Negative Consequences of Maximizing in Friendship Selection.

    PubMed

    Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B

    2017-02-27

Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempts to maximize when selecting potential friends are detrimental to one's well-being.

  7. Detrimental Relations of Maximization with Academic and Career Attitudes

    ERIC Educational Resources Information Center

    Dahling, Jason J.; Thompson, Mindi N.

    2013-01-01

    Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…

  8. Maximizing a transport platform through computer technology.

    PubMed

    Hudson, Timothy L

    2003-01-01

    One of the most recent innovations coalescing computer technology and medical care is the further development of integrated medical component technology coupled with a computer subsystem. One such example is the self-contained patient transport system known as the Life Support for Trauma and Transport (LSTAT(tm)). The LSTAT creates a new transport platform that integrates the most current medical monitoring and therapeutic capabilities with computer processing capacity, creating the first "smart litter". The LSTAT is built around a computer system that is network capable and acts as the data hub for multiple medical devices and utilities, including data, power, and oxygen systems. The system logs patient and device data in a simultaneous, time-synchronized, continuous format, allowing electronic transmission, storage, and electronic documentation. The third-generation LSTAT includes an oxygen system, ventilator, clinical point-of-care blood analyzer, suction, defibrillator, infusion pump, and physiologic monitor, as well as on-board power and oxygen systems. The developers of LSTAT and other developers have the ability to further expand integrative component technology by developing and integrating clinical decision support systems.

  9. Quantitative determination of maximal imaging depth in all-NIR multiphoton microscopy images of thick tissues

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Akers, Walter J.; Sudlow, Gail P.; Yazdanfar, Siavash; Achilefu, Samuel

    2014-02-01

    We report two methods for quantitatively determining maximal imaging depth from thick tissue images captured using all-near-infrared (NIR) multiphoton microscopy (MPM). All-NIR MPM is performed using 1550 nm laser excitation with NIR detection. This method enables imaging more than five-fold deep in thick tissues in comparison with other NIR excitation microscopy methods. In this study, we show a correlation between the multiphoton signal along the depth of tissue samples and the shape of the corresponding empirical probability density function (pdf) of the photon counts. Histograms from this analysis become increasingly symmetric with the imaging depth. This distribution transitions toward the background distribution at higher imaging depths. Inspired by these observations, we propose two independent methods based on which one can automatically determine maximal imaging depth in the all-NIR MPM images of thick tissues. At this point, the signal strength is expected to be weak and similar to the background. The first method suggests the maximal imaging depth corresponds to the deepest image plane where the ratio between the mean and median of the empirical photon-count pdf is outside the vicinity of 1. The second method suggests the maximal imaging depth corresponds to the deepest image plane where the squared distance between the empirical photon-count mean obtained from the object and the mean obtained from the background is greater than a threshold. We demonstrate the application of these methods in all-NIR MPM images of mouse kidney tissues to study maximal depth penetration in such tissues.
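The two stopping criteria described above can be sketched directly. In the sketch below the function names, the tolerance, and the threshold are illustrative placeholders, and `planes` stands in for per-plane photon-count data ordered from shallow to deep.

```python
import numpy as np

def maximal_depth_mean_median(planes, tol=0.05):
    """Method 1 (sketch): deepest plane whose photon-count mean/median
    ratio lies outside the vicinity [1 - tol, 1 + tol] of 1; symmetric
    (background-like) histograms have a ratio near 1."""
    deepest = -1
    for i, counts in enumerate(planes):
        ratio = counts.mean() / np.median(counts)
        if abs(ratio - 1.0) > tol:
            deepest = i
    return deepest

def maximal_depth_background(planes, background_mean, threshold):
    """Method 2 (sketch): deepest plane whose squared distance between
    the plane's mean photon count and the background mean exceeds the
    threshold."""
    deepest = -1
    for i, counts in enumerate(planes):
        if (counts.mean() - background_mean) ** 2 > threshold:
            deepest = i
    return deepest
```

On synthetic data (skewed counts for signal planes, symmetric counts for background planes) both criteria flag the same deepest signal plane.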

  10. Combustion Research Aboard the ISS Utilizing the Combustion Integrated Rack and Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Sutliff, Thomas J.; Otero, Angel M.; Urban, David L.

    2002-01-01

    The Physical Sciences Research Program of NASA sponsors a broad suite of peer-reviewed research investigating fundamental combustion phenomena and applied combustion research topics. This research is performed through both ground-based and on-orbit research capabilities. The International Space Station (ISS) and two facilities, the Combustion Integrated Rack and the Microgravity Science Glovebox, are key elements in the execution of microgravity combustion flight research planned for the foreseeable future. This paper reviews the Microgravity Combustion Science research planned for the International Space Station implemented from 2003 through 2012. Examples of selected research topics, expected outcomes, and potential benefits will be provided. This paper also summarizes a multi-user hardware development approach, recapping the progress made in preparing these research hardware systems. Within the description of this approach, an operational strategy is presented that illustrates how utilization of constrained ISS resources may be maximized dynamically to increase science through design decisions made during hardware development.

  11. Siting Samplers to Minimize Expected Time to Detection

    SciTech Connect

    Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.

    2012-05-02

    We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti(1) developed a proof of concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal detect to treat architectures, which maximize the probability of detecting a release. This paper develops a more general approach, and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal detect to warn architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.
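A common heuristic for this class of design problems is greedy placement. The sketch below is not the authors' algorithm; it is a minimal illustration, with hypothetical inputs, of choosing sampler locations one at a time to minimize the expected time to first detection over a set of release scenarios.

```python
import numpy as np

def greedy_sampler_siting(det_time, scenario_prob, n_samplers):
    """Greedy detect-to-warn sketch: det_time[i, s] is the time for a
    sampler at candidate location i to detect release scenario s, and
    scenario_prob[s] is that scenario's probability. Each round adds
    the location that most reduces the expected first-detection time."""
    n_loc = det_time.shape[0]
    chosen = []
    best_time = np.full(det_time.shape[1], np.inf)
    for _ in range(n_samplers):
        gains = []
        for i in range(n_loc):
            if i in chosen:
                gains.append(np.inf)
                continue
            # Detection time if location i is added to the current set.
            t = np.minimum(best_time, det_time[i])
            gains.append(float(np.dot(scenario_prob, t)))
        i_best = int(np.argmin(gains))
        chosen.append(i_best)
        best_time = np.minimum(best_time, det_time[i_best])
    return chosen, float(np.dot(scenario_prob, best_time))
```

With one sampler the greedy choice hedges across scenarios; with two it switches to specialized locations, which is the qualitative difference between small and large networks.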

  12. Mining maximal cohesive induced subnetworks and patterns by integrating biological networks with gene profile data.

    PubMed

    Alroobi, Rami; Ahmed, Syed; Salem, Saeed

    2013-09-01

    With the availability of vast amounts of protein-protein, protein-DNA interactions, and genome-wide mRNA expression data for several organisms, identifying biological complexes has emerged as a major task in systems biology. Most of the existing approaches for complex identification have focused on utilizing one source of data. Recent research has shown that systematic integration of gene profile data with interaction data yields significant patterns. In this paper, we introduce the problem of mining maximal cohesive subnetworks that satisfy user-defined constraints defined over the gene profiles of the reported subnetworks. Moreover, we introduce the problem of finding maximal cohesive patterns which are sets of cohesive genes. Experiments on Yeast and Human datasets show the effectiveness of the proposed approach by assessing the overlap of the discovered subnetworks with known biological complexes. Moreover, GO enrichment analysis shows that the discovered subnetworks are biologically significant.

  13. Anticonvulsant effects of benzhydryl piperazines on maximal electroshock seizures in rats.

    PubMed

    Novack, G D; Stark, L G; Peterson, S L

    1979-03-01

The anticonvulsant effects of four benzhydryl piperazines, SC-13504 (ropizine, an anticonvulsant), hydroxyzine (HDX, an anxiolytic), chlorcyclizine (CCZ, an antihistaminic) and buclizine (BUC, an antihistaminic), were investigated utilizing a modified maximal electroshock seizure test in rats. In addition to detecting the presence or absence of tonic hindlimb extension, the modified method quantified various phases of the seizure. All four benzhydryl piperazines exhibited anticonvulsant activity in maximal electroshock seizure, but SC-13504 was similar in efficacy to phenobarbital and phenytoin, and much more effective than HDX, CCZ or BUC. Additionally, SC-13504 possessed a therapeutic index much greater than any of the compounds tested. The duration of action of the benzhydryl piperazines, in hours, was: SC-13504, 0.5 to 8; HDX, 0.5 to 2; CCZ, 0.5 to 16; and BUC, 2 to 8. BUC and CCZ are postulated to be converted to active anticonvulsant metabolites.

  14. Analytical properties of credibilistic expectation functions.

    PubMed

    Wang, Shuming; Wang, Bo; Watada, Junzo

    2014-01-01

The expectation function of a fuzzy variable is an important and widely used criterion in fuzzy optimization, and sound properties of the expectation function may help in model analysis and solution algorithm design for fuzzy optimization problems. The present paper deals with analytical properties of credibilistic expectation functions of fuzzy variables in three respects. First, some continuity theorems giving continuity and semicontinuity conditions are proved for the expectation functions. Second, a differentiation formula of the expectation function is derived, which shows that, under certain conditions, the derivative of the fuzzy expectation function with respect to the parameter equals the expectation of the derivative of the fuzzy function with respect to the parameter. Finally, a law of large numbers for fuzzy variable sequences is obtained by leveraging the Chebyshev inequality for fuzzy variables. Some examples are provided to verify the results obtained.
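As a concrete instance of the expectation criterion discussed above, the sketch below numerically evaluates the standard credibilistic expected value E[ξ] = ∫₀^∞ Cr{ξ ≥ r} dr − ∫₋∞^0 Cr{ξ ≤ r} dr for a triangular fuzzy variable (a, b, c), whose closed form is (a + 2b + c)/4. The function names and the nonnegative-support restriction are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def cr_geq(r, a, b, c):
    """Credibility Cr{xi >= r} of a triangular fuzzy variable (a, b, c),
    i.e. the average of the possibility and necessity of {xi >= r}."""
    if r <= a:
        return 1.0
    if r <= b:
        return 1.0 - (r - a) / (2.0 * (b - a))
    if r <= c:
        return (c - r) / (2.0 * (c - b))
    return 0.0

def credibilistic_expectation(a, b, c, n=20001):
    """Evaluate E[xi] = int_0^inf Cr{xi >= r} dr by trapezoidal
    quadrature; this sketch assumes a >= 0 so the second integral
    in the general definition vanishes."""
    assert a >= 0, "sketch assumes a nonnegative support"
    r = np.linspace(0.0, c, n)
    cr = np.array([cr_geq(x, a, b, c) for x in r])
    return float(np.sum((cr[1:] + cr[:-1]) * np.diff(r)) / 2.0)
```

For (a, b, c) = (1, 2, 4) the quadrature reproduces the closed-form value (1 + 4 + 4)/4 = 2.25.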

  15. Prior expectations facilitate metacognition for perceptual decision.

    PubMed

    Sherman, M T; Seth, A K; Barrett, A B; Kanai, R

    2015-09-01

    The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition.

  16. Integrated life sciences technology utilization development program

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The goal of the TU program was to maximize the development of operable hardware and systems which will be of substantial benefit to the public. Five working prototypes were developed, and a meal system for the elderly is now undergoing evaluation. Manpower utilization is shown relative to the volume of requests in work for each month. The ASTP mobile laboratories and post Skylab bedrest study are also described.

  17. Does explicit expectation really affect preparation?

    PubMed

    Umbach, Valentin J; Schwager, Sabine; Frensch, Peter A; Gaschler, Robert

    2012-01-01

    Expectation enables preparation for an upcoming event and supports performance if the anticipated situation occurs, as manifested in behavioral effects (e.g., decreased RT). However, demonstrating coincidence between expectation and preparation is not sufficient for attributing a causal role to the former. The content of explicit expectation may simply reflect the present preparation state. We targeted this issue by experimentally teasing apart demands for preparation and explicit expectations. Expectations often originate from our experience: we expect that events occurring with a high frequency in the past are more likely to occur again. In addition to expectation, other task demands can feed into action preparation. In four experiments, frequency-based expectation was pitted against a selective response deadline. In a three-choice reaction time task, participants responded to stimuli that appeared with varying frequency (60, 30, 10%). Trial-by-trial stimulus expectations were either captured via verbal predictions or induced by visual cues. Predictions as well as response times quickly conformed to the variation in stimulus frequency. After two (of five) experimental blocks we forced participants by selective time pressure to respond faster to a less frequent stimulus. Therefore, participants had to prepare for one stimulus (medium frequency) while often explicitly expecting a different one (high frequency). Response times for the less frequent stimulus decreased immediately, while explicit expectations continued to indicate the (unchanged) presentation frequencies. Explicit expectations were thus not just reflecting preparation. In fact, participants responded faster when the stimulus matched the trial-wise expectations, even when task demands discouraged their use. In conclusion, we argue that explicit expectation feeds into preparatory processes instead of being a mere by-product.

  18. STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS*

    PubMed Central

    HUDOMIET, PÉTER; KÉZDI, GÁBOR; WILLIS, ROBERT J.

    2011-01-01

This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households’ expectations about the returns on the stock market index. We use data from the Health and Retirement Study that was fielded in February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies substantial long-term increase in disagreement. The increase in disagreement is larger among the stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market. PMID:21547244

  19. STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS.

    PubMed

    Hudomiet, Péter; Kézdi, Gábor; Willis, Robert J

    2011-01-01

    This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households' expectations about the returns on the stock market index. We use data from the Health and Retirement Study that was fielded in February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies substantial long-term increase in disagreement. The increase in disagreement is larger among the stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market.

  20. General review of maximal aerobic velocity measurement at laboratory. Proposition of a new simplified protocol for maximal aerobic velocity assessment.

    PubMed

    Berthon, P; Fellmann, N

    2002-09-01

The maximal aerobic velocity concept, developed since the eighties, is considered either as the minimal velocity which elicits maximal oxygen consumption or as the "velocity associated with maximal oxygen consumption". Different methods for measuring maximal aerobic velocity on a treadmill under laboratory conditions have been elaborated, but all these specific protocols measure V(amax) either during a maximal oxygen consumption test or in association with such a test. An inaccurate method presents a number of problems in the subsequent use of the results, for example in the elaboration of training programs, in the study of repeatability, or in the determination of individual limit time. This study analyzes 14 different methods to understand their interests and limits, with a view to proposing a general methodology for measuring V(amax). In brief, the test should be progressive and maximal without any rest period and of 17 to 20 min total duration. It should begin with a five-min warm-up at 60-70% of the maximal aerobic power of the subjects. The beginning of the trial should be fixed so that four or five steps have to be run. The duration of the steps should be three min with a 1% slope and an increasing speed of 1.5 km x h(-1) until complete exhaustion. The last steps could be reduced to two min for a 1 km x h(-1) increment. The maximal aerobic velocity is adjusted in relation to the duration of the last step.
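The recommended protocol above is concrete enough to encode as a small schedule generator. The sketch below is purely illustrative: the paper specifies the timing, slope, and increments, but the function name, the warm-up fraction, and the choice of starting speed (four increments below the runner's estimated maximal aerobic velocity, so that four or five steps are run) are assumptions of this sketch.

```python
def vamax_protocol(estimated_mav, warmup_fraction=0.65, n_steps=6,
                   step_min=3.0, increment=1.5):
    """Illustrative schedule for the simplified treadmill protocol
    (1% slope throughout): a 5-min warm-up at `warmup_fraction` of the
    estimated maximal aerobic velocity, then `step_min`-minute steps
    increasing by `increment` km/h until exhaustion. Returns a list of
    (duration_min, speed_kmh) pairs."""
    warmup_speed = round(warmup_fraction * estimated_mav, 1)
    start = estimated_mav - 4 * increment  # start ~4 steps below the estimate
    steps = [(5.0, warmup_speed)]
    for k in range(n_steps):
        steps.append((step_min, round(start + k * increment, 1)))
    return steps
```

For a runner with an estimated V(amax) of 16 km/h this yields a 5-min warm-up at 10.4 km/h followed by 3-min steps at 10.0, 11.5, 13.0, ... km/h; in practice the test ends at exhaustion rather than after a fixed `n_steps`.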

  1. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome

    PubMed Central

    Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz

    2015-01-01

X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression

  2. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome.

    PubMed

    Hurst, Laurence D; Ghanbarian, Avazeh T; Forrest, Alistair R R; Huminiecki, Lukasz

    2015-12-01

X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression

  3. Premenstrual symptoms and smoking-related expectancies.

    PubMed

    Pang, Raina D; Bello, Mariel S; Stone, Matthew D; Kirkpatrick, Matthew G; Huh, Jimi; Monterosso, John; Haselton, Martie G; Fales, Melissa R; Leventhal, Adam M

    2016-06-01

    Given that prior research implicates smoking abstinence in increased premenstrual symptoms, tobacco withdrawal, and smoking behaviors, it is possible that women with more severe premenstrual symptoms have stronger expectancies about the effects of smoking and abstaining from smoking on mood and withdrawal. However, such relations have not been previously explored. This study examined relations between premenstrual symptoms experienced in the last month and expectancies that abstaining from smoking results in withdrawal (i.e., smoking abstinence withdrawal expectancies), that smoking is pleasurable (i.e., positive reinforcement smoking expectancies), and smoking relieves negative mood (i.e., negative reinforcement smoking expectancies). In a cross-sectional design, 97 non-treatment seeking women daily smokers completed self-report measures of smoking reinforcement expectancies, smoking abstinence withdrawal expectancies, premenstrual symptoms, mood symptoms, and nicotine dependence. Affect premenstrual symptoms were associated with increased negative reinforcement smoking expectancies, but not over and above covariates. Affect and pain premenstrual symptoms were associated with increased positive reinforcement smoking expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Affect, pain, and water retention premenstrual symptoms were associated with increased smoking abstinence withdrawal expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Findings from this study suggest that addressing concerns about withdrawal and alternatives to smoking may be particularly important in women who experience more severe premenstrual symptoms, especially affect-related changes.

  4. Rapid Expectation Adaptation during Syntactic Comprehension

    PubMed Central

    Fine, Alex B.; Jaeger, T. Florian; Farmer, Thomas A.; Qian, Ting

    2013-01-01

    When we read or listen to language, we are faced with the challenge of inferring intended messages from noisy input. This challenge is exacerbated by considerable variability between and within speakers. Focusing on syntactic processing (parsing), we test the hypothesis that language comprehenders rapidly adapt to the syntactic statistics of novel linguistic environments (e.g., speakers or genres). Two self-paced reading experiments investigate changes in readers’ syntactic expectations based on repeated exposure to sentences with temporary syntactic ambiguities (so-called “garden path sentences”). These sentences typically lead to a clear expectation violation signature when the temporary ambiguity is resolved to an a priori less expected structure (e.g., based on the statistics of the lexical context). We find that comprehenders rapidly adapt their syntactic expectations to converge towards the local statistics of novel environments. Specifically, repeated exposure to a priori unexpected structures can reduce, and even completely undo, their processing disadvantage (Experiment 1). The opposite is also observed: a priori expected structures become less expected (even eliciting garden paths) in environments where they are hardly ever observed (Experiment 2). Our findings suggest that, when changes in syntactic statistics are to be expected (e.g., when entering a novel environment), comprehenders can rapidly adapt their expectations, thereby overcoming the processing disadvantage that mistaken expectations would otherwise cause. Our findings take a step towards unifying insights from research in expectation-based models of language processing, syntactic priming, and statistical learning. PMID:24204909

  5. Women's Retirement Expectations: How Stable Are They?

    PubMed Central

    Hardy, Melissa A.

    2009-01-01

Objective: Using the National Longitudinal Survey of Mature Women, we examine between- and within-person differences in expected retirement age as a key element of the retirement planning process. The expectation typologies of 1,626 women born between 1923 and 1937 were classified jointly on the basis of specificity and consistency. Methods: Latent class analysis was used to determine retirement expectation patterns over a 7-year span. Multinomial logistic regression analyses were employed to estimate the effects of demographic and status characteristics on the likelihood of reporting 4 distinct longitudinal patterns of retirement expectations. Results: Substantial heterogeneity in reports of expected retirement age between and within individuals over the 7-year span was found. Demographic and status characteristics, specifically age, race, marital status, job tenure, and recent job change, sorted respondents into different retirement expectation patterns. Conclusions: The frequent within-person fluctuations and substantial between-person heterogeneity in retirement expectations indicate uncertainty and variability in both expectations and the process of expectation formation. Variability in respondents' reports suggests that studying retirement expectations at multiple time points better captures the dynamics of preretirement planning. PMID:19176483

  6. Oxygen uptake in maximal effort constant rate and interval running.

    PubMed

    Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley

    2013-01-01

This study investigated differences in the average VO2 of maximal effort interval running compared with maximal effort constant rate running at lactate threshold, matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 min at VO2max speed alternated with 2 min at 50% of VO2, repeated 5 times, and a run at the average speed sustained during the interval run (CR submax). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different from that for CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than that for CR submax, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CRLT, 4431 (4202-3731) metres, compared with INT and CR submax, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses a similar amount of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the constant-rate run.

  7. It's not what you expect: feedback negativity is independent of reward expectation and affective responsivity in a non-probabilistic task.

    PubMed

    Highsmith, Jonathan M; Wuensch, Karl L; Tran, Tuan; Stephenson, Alexandra J; Everhart, D Erik

    2017-03-01

ERP studies commonly utilize gambling-based reinforcement tasks to elicit feedback negativity (FN) responses. This study used a pattern learning task in order to limit gambling-related fallacious reasoning and possible affective responses to gambling, while investigating the FN component under high and low reward expectation conditions. Eighteen undergraduates completed measures of reinforcement sensitivity and of trait and state affect, and underwent psychophysiological recording. The pattern learning task elicited an FN component for both high and low win expectancy conditions, which was found to be independent of reward expectation and showed little relationship with task and personality variables. We also observed a P3 component, which showed sensitivity to outcome expectancy variation and relationships to measures of anxiety, appetitive motivation, and cortical asymmetry, although these varied by electrode location and expectancy condition. Findings suggest that the FN reflected a binary reward-related signal, in the absence of positive affective responses, with little of the relationship to reward expectation found in previous studies.

  8. Stock Market Expectations of Dutch Households

    PubMed Central

    Hurd, Michael; van Rooij, Maarten; Winter, Joachim

    2013-01-01

    Despite its importance for the analysis of life-cycle behavior and, in particular, retirement planning, stock ownership by private households is poorly understood. Among other approaches to investigate this puzzle, recent research has started to elicit private households’ expectations of stock market returns. This paper reports findings from a study that collected data over a two-year period both on households’ stock market expectations (subjective probabilities of gains or losses) and on whether they own stocks. We document substantial heterogeneity in financial market expectations. Expectations are correlated with stock ownership. Over the two years of our data, stock market prices increased, and expectations of future stock market price changes also increased, lending support to the view that expectations are influenced by recent stock gains or losses. PMID:23997423

  9. Stock Market Expectations of Dutch Households.

    PubMed

    Hurd, Michael; van Rooij, Maarten; Winter, Joachim

    2011-04-01

    Despite its importance for the analysis of life-cycle behavior and, in particular, retirement planning, stock ownership by private households is poorly understood. Among other approaches to investigate this puzzle, recent research has started to elicit private households' expectations of stock market returns. This paper reports findings from a study that collected data over a two-year period both on households' stock market expectations (subjective probabilities of gains or losses) and on whether they own stocks. We document substantial heterogeneity in financial market expectations. Expectations are correlated with stock ownership. Over the two years of our data, stock market prices increased, and expectations of future stock market price changes also increased, lending support to the view that expectations are influenced by recent stock gains or losses.

  10. Impact of diabetes mellitus on life expectancy and health-adjusted life expectancy in Canada

    PubMed Central

    2012-01-01

    The objectives of this study were to estimate life expectancy (LE) and health-adjusted life expectancy (HALE) for Canadians with and without diabetes and to evaluate the impact of diabetes on population health using administrative and survey data. Mortality data from the Canadian Chronic Disease Surveillance System (2004 to 2006) and Health Utilities Index data from the Canadian Community Health Survey (2000 to 2005) were used. Life table analysis was applied to calculate LE, HALE, and their confidence intervals using the Chiang and the adapted Sullivan methods. LE and HALE were significantly lower among people with diabetes than for people without the disease. LE and HALE for females without diabetes were 85.0 and 73.3 years, respectively (males: 80.2 and 70.9 years). Diabetes was associated with a loss of LE and HALE of 6.0 years and 5.8 years, respectively, for females, and 5.0 years and 5.3 years, respectively, for males, living with diabetes at 55 years of age. The overall gains in LE and HALE after the hypothetical elimination of prevalent diagnosed diabetes cases in the population were 1.4 years and 1.2 years, respectively, for females, and 1.3 years for both LE and HALE for males. The results of the study confirm that diabetes is an important disease burden in Canada impacting the female and male populations differently. The methods can be used to calculate LE and HALE for other chronic conditions, providing useful information for public health researchers and policymakers. PMID:22531113
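The Chiang-style life-table arithmetic the abstract relies on can be sketched as follows. This is a minimal illustration, not the study's method: the constant `a_frac` (the average fraction of an interval lived by those who die in it) and the example mortality rates are simplifying assumptions, not the Canadian surveillance data.

```python
# Minimal abridged life-table sketch in the spirit of the Chiang method:
# convert age-specific mortality rates to interval death probabilities,
# accumulate person-years lived, and divide by the starting cohort (radix).

def life_expectancy(widths, mort_rates, a_frac=0.5):
    """Life expectancy at the first age group, from age-specific rates.

    widths:      length (in years) of each age interval; last is open-ended
    mort_rates:  mortality rate m_x for each interval
    a_frac:      assumed fraction of the interval lived by decedents (a_x),
                 held constant here for simplicity
    """
    # Convert rates (m_x) to probabilities of dying (q_x) per interval.
    q = []
    for n, m in zip(widths, mort_rates):
        qx = (n * m) / (1 + n * (1 - a_frac) * m)
        q.append(min(qx, 1.0))
    q[-1] = 1.0  # everyone dies within the open-ended final interval

    radix = 100000.0      # survivors entering the first interval
    l = radix
    total_years = 0.0     # person-years lived, summed over intervals
    for n, m, qx in zip(widths, mort_rates, q):
        d = l * qx                            # deaths in the interval
        if qx < 1.0:
            L = n * (l - d) + n * a_frac * d  # person-years in the interval
        else:
            L = l / m                         # open interval: l_x / m_x
        total_years += L
        l -= d
    return total_years / radix
```

With a single open-ended interval the formula reduces to 1/m, so `life_expectancy([5], [0.02])` returns 50.0; health-adjusted life expectancy (Sullivan's method) would additionally weight each interval's person-years by a health utility score.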

  11. Brain Mechanisms Supporting Violated Expectations of Pain

    PubMed Central

    Zeidan, Fadel; Lobanov, Oleg V.; Kraft, Robert A.; Coghill, Robert C.

    2015-01-01

    The subjective experience of pain is influenced by interactions between prior experiences, future predictions and incoming afferent information. Expectations of high pain can exacerbate pain while expectations of low pain during a consistently noxious stimulus can produce significant reductions in pain. However, the brain mechanisms associated with processing mismatches between expected and experienced pain are poorly understood, but are important for imparting salience to a sensory event in order to override erroneous top-down expectancy-mediated information. The present investigation examined pain-related brain activation when expectations of pain were abruptly violated. After conditioning participants to cues predicting low or high pain, ten incorrectly cued stimuli were administered across 56 stimulus trials to determine if expectations would be less influential on pain when there is a high discordance between pre-stimulus cues and corresponding thermal stimulation. Incorrectly cued stimuli produced pain ratings and pain-related brain activation consistent with placebo analgesia, nocebo hyperalgesia, and violated expectations. Violated expectations of pain were associated with activation in distinct regions of the inferior parietal lobe, including the supramarginal and angular gyrus, and intraparietal sulcus, the superior parietal lobe, cerebellum and occipital lobe. Thus, violated expectations of pain engage mechanisms supporting salience-driven sensory discrimination, working memory, and associative learning processes. By overriding the influence of expectations on pain, these brain mechanisms are likely engaged in clinical situations where patients’ unrealistic expectations for pain relief diminish the efficacy of pain treatments. Accordingly, these findings underscore the importance of maintaining realistic expectations to augment the effectiveness of pain management. PMID:26083664

  12. Expectancy and Repetition in Task Preparation

    NASA Technical Reports Server (NTRS)

    Ruthruff, E.; Remington, R. W.; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

We studied the mechanisms of task preparation using a design that pitted task expectancy against task repetition. In one experiment, two simple cognitive tasks were presented in a predictable sequence containing both repetitions and non-repetitions. The typical task sequence was AABBAABB. Occasional violations of this sequence allowed us to measure the effects of valid versus invalid expectancy. With this design, we were able to study the effects of task expectancy, task repetition, and their interaction.

  13. Increasing hope by addressing clients' outcome expectations.

    PubMed

    Swift, Joshua K; Derthick, Annie O

    2013-09-01

    Addressing clients' outcome expectations is an important clinical process that can lead to a strong therapeutic alliance, more positive treatment outcomes, and decreased rates of premature termination from psychotherapy. Five interventions designed to foster appropriate outcome expectations are discussed, including presenting a convincing treatment rationale, increasing clients' faith in their therapists, expressing faith in clients, providing outcome education, and comparing progress with expectations. Clinical examples and research support are provided for each.

  14. On the Ribosomal Density that Maximizes Protein Translation Rate

    PubMed Central

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2016-01-01

During mRNA translation, several ribosomes attach to the same mRNA molecule, simultaneously translating it into a protein. This pipelining increases the protein translation rate. A natural and important question is what ribosomal density maximizes the protein translation rate. Using mathematical models of ribosome flow along both linear and circular mRNA molecules, we prove that the steady-state protein translation rate is typically maximized when the ribosomal density is one half of the maximal possible density. We discuss the implications of our results for endogenous genes under natural cellular conditions and also for synthetic biology. PMID:27861564
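The half-density result can be made concrete with a much simpler stand-in for the authors' ribosome flow models: in a mean-field TASEP-style picture, the steady-state current along the mRNA is J(rho) = c·rho·(1−rho), where rho is ribosome density as a fraction of maximal packing and c is an elongation-rate constant. Both the functional form and the constant c are illustrative assumptions here, not the paper's model.

```python
# Mean-field sketch of ribosome current vs. density: J(rho) = c * rho * (1 - rho).
# Occupied sites carry flow (factor rho) but also block downstream hopping
# (factor 1 - rho), so the current peaks at intermediate density.

def translation_rate(rho, c=1.0):
    """Steady-state ribosome current at density rho in [0, 1]."""
    return c * rho * (1.0 - rho)

def argmax_density(steps=1000):
    """Grid search for the density that maximizes the current."""
    best_rho, best_j = 0.0, 0.0
    for i in range(steps + 1):
        rho = i / steps
        j = translation_rate(rho)
        if j > best_j:
            best_rho, best_j = rho, j
    return best_rho
```

The grid search returns rho = 0.5, matching the abstract's claim that the translation rate is maximized at one half of the maximal possible density; the paper's ribosome flow models are considerably richer but reach the same qualitative conclusion.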

  15. Building hospital TQM teams: effective polarity analysis and maximization.

    PubMed

    Hurst, J B

    1996-09-01

Building and maintaining teams require careful attention to and maximization of such polar opposites ("polarities") as individual and team, directive and participatory leadership, task and process, and stability and change. Analyzing systematic elements of any polarity and listing blocks, supports, and flexible ways to maximize it will prevent the negative consequences that occur when treating a polarity like a solvable problem. Flexible, well-timed shifts from pole to pole result in the maximization of upside and minimization of downside consequences.

  16. Affect-regulation expectancies among Gamblers.

    PubMed

    Will Shead, N; Hodgins, David C

    2009-09-01

    Factor scores on a gambling expectancy questionnaire (GEQ) were used to subtype 132 university students who gamble regularly (37.9% male; M age = 22.6 years, SD = 6.04) as: Reward Expectancy Gamblers (Reward EGs)-have strong expectations that gambling augments positive mood, Relief Expectancy Gamblers (Relief EGs)-have strong expectations that gambling relieves negative affect, and Non-Expectancy Gamblers (Non-EGs)-have neither strong expectation. Gambling on a high-low card game was compared across subtypes following priming for either "relief" or "reward" affect-regulation expectancies with the Scrambled Sentence Test (SST). The hypothesized Prime type x GEQ subtype interaction was not significant. When a more stringent set of criteria for GEQ subtyping was imposed, the "purified" sub-sample (n = 54) resulted in the hypothesized statistically significant Prime type x GEQ subtype interaction. Relief EGs gambled more after being primed with the construct "relief of negative emotions" compared to after being primed with the construct "augmentation of positive emotion." Planned orthogonal contrasts showed a significant linear increase in number of bets made across GEQ subtypes when prime type corresponded to GEQ subtype. The results suggest a need for components in gambling treatment programs that address clients' expectancies that gambling can provide a specific desirable emotional outcome.

  17. Affirmative Action and the Issue of Expectancies.

    ERIC Educational Resources Information Center

    Crosby, Faye; Clayton, Susan

    1990-01-01

    Discusses the impact of affirmative action policies on expectancies and interpersonal relations. Concludes that affirmative action policies offer clear advantages to the members of target groups. (DM)

  18. Cascade Reservoirs Floodwater Resources Utilization

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2015-12-01

A floodwater resources utilization method based on dynamic control of the flood-control limited water level of cascade reservoirs is put forward in this paper. The main flood season was divided into periods using fuzzy statistical analysis of the probability distributions of the beginning time of the first flood and the ending time of the final flood between July and September. By fitting a flood-season membership function for each period, the flood-control limited water levels of the cascade reservoirs were computed for each period from the characteristic data of the reservoirs. Following the principle of maximizing benefit while minimizing risk, a reasonable combination of flood-control limited water levels for the cascade reservoirs is proposed.
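The fuzzy flood-season division the abstract describes rests on membership functions that grade how strongly a given day belongs to the main flood season. A minimal sketch, assuming a triangular membership shape and hypothetical breakpoints (day 182, roughly early July; day 273, roughly end of September), neither of which comes from the paper:

```python
# Hypothetical triangular fuzzy membership function for "main flood season"
# as a function of day of year. Breakpoints are illustrative assumptions.

def flood_season_membership(day, start=182, peak=212, end=273):
    """Degree (0..1) to which a given day belongs to the main flood season."""
    if day <= start or day >= end:
        return 0.0          # outside the fuzzy support entirely
    if day <= peak:
        return (day - start) / (peak - start)  # rising edge
    return (end - day) / (end - peak)          # falling edge
```

In the paper's scheme, a membership function like this (fitted per period from the flood-timing statistics) would then feed into setting a period-specific flood-control limited water level.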

  19. Low versus High Expectations: A Review of Teacher Expectations Effects on Minority Students.

    ERIC Educational Resources Information Center

    McCormick, Theresa E.; Noriega, Tino

    1986-01-01

    The article describes different types of teacher expectations and expectation effects, particularly on minority students. Evidence for the existence of expectation effects is reviewed. Descriptions of behaviors associated with teacher expectations are summarized and recommendations are made for enhancing the learning environment for all students.…

  20. Temporal expectation and spectral expectation operate in distinct fashion on neuronal populations.

    PubMed

    Hsu, Yi-Fang; Hämäläinen, Jarmo A; Waszak, Florian

    2013-11-01

    The formation of temporal expectation (i.e., the prediction of "when") is of prime importance to sensory processing. It can modulate sensory processing at early processing stages probably via the entrainment of low-frequency neuronal oscillations in the brain. However, sensory predictions involve not only temporal expectation but also spectral expectation (i.e., the prediction of "what"). Here we investigated how temporal expectation may interrelate with spectral expectation by explicitly setting up temporal expectation and spectral expectation in a target detection task. We found that reaction time (RT) was shorter when targets were temporally expected than when they were temporally unexpected. The temporal expectation effect was larger with than without spectral expectation. However, this interaction in the behavioural data did not result from an interaction in the electroencephalography (EEG), where we observed independent main effects of temporal expectation and spectral expectation. More precisely, we found that the N1 and P2 event-related potential (ERP) components and the entrainment of low-frequency neuronal oscillations were exclusively modulated by temporal expectation, whilst only the P3 ERP component was modulated by spectral expectation. Our results, thus, support the idea that temporal expectation and spectral expectation operate in distinct fashion on neuronal populations.

  1. Lithium-thionyl chloride battery design concepts for maximized power applications

    NASA Astrophysics Data System (ADS)

    Kane, P.; Marincic, N.

The need for primary batteries configured to deliver maximized power has been asserted by many different procuring activities. Battery Engineering Inc. has developed some specific design concepts and mastered some specialized techniques utilized in the production of this type of power source. The batteries have been successfully bench tested during the course of virtually all of these programs, culminating in two successful test launches under the USAF Plasma Effects Decoy Program. This paper briefly discusses some of these design concepts and the rationale behind them.

  2. Documenting and explaining the common AAB pattern in music and humor: establishing and breaking expectations.

    PubMed

    Rozin, Paul; Rozin, Alexander; Appel, Brian; Wachtel, Charles

    2006-08-01

    The AAB pattern consists of two similar events followed by a third dissimilar event. The prevalence of this pattern in the aesthetic domain may be explained as violation of expectation: A minimum of two iterations is required to establish a repetitive pattern; once established, it is most efficient to promptly violate the expected continuance of the pattern to produce the maximal aesthetic effect. We demonstrate the prevalence of this pattern (in comparison to AB or AAAB) in a representative sample of a variety of musical genres and in a representative sample of repetitive genre of jokes. We also provide experimental evidence that the AAB pattern in jokes is maximally effective in producing a humor response in participants.

  3. Maximized PUFA measurements improve insight in changes in fatty acid composition in response to temperature.

    PubMed

    van Dooremalen, Coby; Pel, Roel; Ellers, Jacintha

    2009-10-01

A general mechanism underlying the response of ectotherms to environmental changes often involves changes in fatty acid composition. Theory predicts that a decrease in temperature causes an increase in the unsaturation of fatty acids, with an important role for long-chain poly-unsaturated fatty acids (PUFAs). However, PUFAs are particularly unstable and susceptible to peroxidation, hence subtle differences in fatty acid composition can be challenging to detect. We determined the fatty acid composition of springtails (Collembola) in response to two temperatures (5 degrees C and 25 degrees C). First, we tested different sample preparation methods to maximize PUFAs. Treatments consisted of different solvents for primary lipid extraction, mixing with antioxidant, flushing with inert gas, and using different temperature exposures during saponification. In particular, slow saponification at low temperature (90 min at 70 degrees C), combined with replacement of headspace air with nitrogen during saponification and methylation, maximized PUFAs for GC analysis. When these methods were applied to measure thermal responses in fatty acid composition, the data showed that the (maximized) proportion of C(20) PUFAs increased at the low acclimation temperature. However, C(18) PUFAs increased at the high acclimation temperature, which is contrary to expectations. Our study illustrates that PUFA levels in lipids may often be underestimated, and this may hamper a correct interpretation of differential responses of fatty acid composition.

  4. Assessment and mitigation of DNA loss utilizing centrifugal filtration devices.

    PubMed

    Doran, Ashley E; Foran, David R

    2014-11-01

    Maximizing DNA recovery during its isolation can be vital in forensic casework, particularly when DNA yields are expected to be low, such as from touch samples. Many forensic laboratories utilize centrifugal filtration devices to purify and concentrate the DNA; however, DNA loss has been reported when using them. In this study, all centrifugal filtration devices tested caused substantial DNA loss, affecting low molecular weight DNA (PCR product) somewhat more than high molecular weight DNA. Strategies for mitigating DNA loss were then examined, including pre-treatment with glucose, glycogen, silicone (RainX(®)), bovine serum albumin, yeast RNA, or high molecular weight DNA. The length of pre-treatment and UV irradiation of pre-treatment reagents were also investigated. Pre-treatments with glucose and glycogen resulted in little or no improvement in DNA recovery, and most or all DNA was lost after silicone pre-treatment. Devices pre-treated with BSA produced irregular and uninterpretable quantitative PCR amplification curves for the DNA and internal PCR control. On the other hand, nucleic acid pre-treatments greatly improved recovery of all DNAs. Pre-treatment time and its UV irradiation did not influence DNA recovery. Overall, the results show that centrifugal filtration devices trap DNA, yet their proper pre-treatment can circumvent that loss, which is critical in the case of low copy forensic DNA samples.

  5. Sensitivity to conversational maxims in deaf and hearing children.

    PubMed

    Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael

    2010-09-01

    We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics.

  6. Optimizing Air Force Depot Programming to Maximize Operational Capability

    DTIC Science & Technology

    2014-03-27

Table of contents excerpt: LINGO code with notional data by model; RAND formulation to maximize operational capability and minimize cost; Appendix B, final LINGO code by model.

  7. Interpreting Negative Results in an Angle Maximization Problem.

    ERIC Educational Resources Information Center

    Duncan, David R.; Litwiller, Bonnie H.

    1995-01-01

    Presents a situation in which differential calculus is used with inverse trigonometric tangent functions to maximize an angle measure. A negative distance measure ultimately results, requiring a reconsideration of assumptions inherent in the initial figure. (Author/MKR)

  8. Rising Tides: Faculty Expectations of Library Websites

    ERIC Educational Resources Information Center

    Nicol, Erica Carlson; O'English, Mark

    2012-01-01

    Looking at 2003-2009 LibQUAL+ responses at research-oriented universities in the United States, faculty library users report a significant and consistent rise in desires and expectations for library-provided online tools and websites, even as student user groups show declining or leveling expectations. While faculty, like students, also report…

  9. The Expectant Reader in Theory and Practice.

    ERIC Educational Resources Information Center

    Fowler, Lois Josephs; McCormick, Kathleen

    1986-01-01

    Offers a method of using reader response theory that emphasizes the expectations about a text and how those expectations are fulfilled or deflated. Specifically, students read traditional fables, fairy tales, and parables, and compare them to contemporary works such as Kafka's "Metamorphosis" and Marquez's "The Very Old Man With Enormous Wings."…

  10. Cognitive Processing and Expectancy Behavior in Hypnosis

    ERIC Educational Resources Information Center

    Dolby, Robyn M.; Sheehan, Peter W.

    1977-01-01

    Two independent studies were conducted to examine the expectancy behavior of unselected hypnotic, task-motivated, and control-imagination subjects on a slide task requiring response to ambiguous visual information. Results showed that hypnotic subjects consistently demonstrated expectancy behavior, whereas nonhypnotic subjects did not. (Editor/RK)

  11. Do Students Expect Compensation for Wage Risk?

    ERIC Educational Resources Information Center

    Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.

    2011-01-01

    We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by heterogeneity which handicapped the use of actual market wage dispersion as risk measure in earlier studies. Students in…

  12. Expectancy violations promote learning in young children.

    PubMed

    Stahl, Aimee E; Feigenson, Lisa

    2017-02-27

    Children, including infants, have expectations about the world around them, and produce reliable responses when these expectations are violated. However, little is known about how such expectancy violations affect subsequent cognition. Here we tested the hypothesis that violations of expectation enhance children's learning. In four experiments we compared 3- to 6-year-old children's ability to learn novel words in situations that defied versus accorded with their core knowledge of object behavior. In Experiments 1 and 2 we taught children novel words following one of two types of events. One event violated expectations about the spatiotemporal or featural properties of objects (e.g., an object appeared to magically change locations). The other event was almost identical, but did not violate expectations (e.g., an object was visibly moved from one location to another). In both experiments we found that children robustly learned when taught after the surprising event, but not following the expected event. In Experiment 3 we ruled out two alternative explanations for our results. Finally, in Experiment 4, we asked whether surprise affects children's learning in a targeted or a diffuse way. We found that surprise only enhanced children's learning about the entity that had behaved surprisingly, and not about unrelated objects. Together, these experiments show that core knowledge - and violations of expectations generated by core knowledge - shapes new learning.

  13. Course Expectations and Career Management Skills

    ERIC Educational Resources Information Center

    Kennedy, Marnie L.; Haines, Ben

    2008-01-01

    Course completion and student satisfaction is likely to be influenced by how realistic the expectations of students are when they enroll. This report explores the idea that students' expectations would be more realistic if students have well developed career management competencies. Recent research argues that lack of information is not the…

  14. International Variations in Measuring Customer Expectations.

    ERIC Educational Resources Information Center

    Calvert, Philip J.

    2001-01-01

    Discussion of customer expectations of library service quality and SERVQUAL as a measurement tool focuses on two studies: one that compared a survey of Chinese university students' expectations of service quality to New Zealand students; and one that investigated national culture as a source of attitudes to customer service. (Author/LRW)

  15. Expectancy and Phobic Level: Effects on Desensitization

    ERIC Educational Resources Information Center

    Sullivan, Bernard J.; Denney, Douglas R.

    1977-01-01

    Expectancy instructions were introduced six times during the four-week treatment, and effectiveness of these instructions was demonstrated with independent nonreactive measures of subjects' expectancies. An analysis of self-report, behavioral, and unobtrusive measures of snake anxiety revealed significant main effects for instructions, with…

  16. Parents' Role in Adolescents' Educational Expectations

    ERIC Educational Resources Information Center

    Rimkute, Laura; Hirvonen, Riikka; Tolvanen, Asko; Aunola, Kaisa; Nurmi, Jari-Erik

    2012-01-01

    The present study examined the extent to which mothers' and fathers' expectations for their offspring's future education, their level of education, and adolescents' academic achievement predict adolescents' educational expectations. To investigate this, 230 adolescents were examined twice while they were in comprehensive school (in the 7th and 9th…

  17. Grief Experiences and Expectance of Suicide

    ERIC Educational Resources Information Center

    Wojtkowiak, Joanna; Wild, Verena; Egger, Jos

    2012-01-01

    Suicide is generally viewed as an unexpected cause of death. However, some suicides might be expected to a certain extent, which needs to be further studied. The relationships between expecting suicide, feeling understanding for the suicide, and later grief experiences were explored. In total, 142 bereaved participants completed the Grief…

  18. Teacher Expectations and the Able Child.

    ERIC Educational Resources Information Center

    Lee-Corbin, Hilary

    1994-01-01

    Two middle school teachers and two students in each of the teacher's classes were assessed for field dependence-independence (FDI). The teachers were interviewed about their students. Found that one teacher had higher expectations and one had lower expectations for the student who had the same FDI orientation as the teacher than for the student…

  19. What Respondents Really Expect from Researchers

    ERIC Educational Resources Information Center

    Kolar, Tomaz; Kolar, Iztok

    2008-01-01

    This article addresses the issue of falling response rates in telephone surveys. To better understand and maintain respondent goodwill, concepts of psychological contract and respondent expectations are introduced and explored. Results of the qualitative study show that respondent expectations are not only socially contingent but also…

  20. On maximal parabolic regularity for non-autonomous parabolic operators

    NASA Astrophysics Data System (ADS)

    Disser, Karoline; ter Elst, A. F. M.; Rehberg, Joachim

    2017-02-01

We consider linear inhomogeneous non-autonomous parabolic problems associated with sesquilinear forms, with discontinuous dependence on time. We show that for these problems, the property of maximal parabolic regularity can be extrapolated to time integrability exponents r ≠ 2. This allows us to prove maximal parabolic Lr-regularity for discontinuous non-autonomous second-order divergence form operators in very general geometric settings and to prove existence results for related quasilinear equations.

  1. [Maximal anaerobic capacity of man in a modified Wingate test].

    PubMed

    Ushakov, B B; Chelnokova, E V

    1992-01-01

We studied the possibility of using a 380B Siemens-Elema (Sweden) bicycle ergometer to determine the maximal anaerobic capacity of healthy subjects during a modified Wingate test. Exercise was performed under stable-moment conditions, with the braking resistance calculated on the basis of each subject's lean body mass. The values of total work performed and maximal power may be used for comparative evaluation of physical work capacity in participants of training and rehabilitation programs.

  2. Maximally Permissive Composition of Actors in Ptolemy II

    DTIC Science & Technology

    2013-03-20

Maximally Permissive Composition of Actors in Ptolemy II. Marten Lohstroh, Electrical Engineering and Computer Sciences, University of California at... This report addresses the problem of handling dynamic data in the statically typed, actor-oriented modeling environment called Ptolemy II. It explores the possibilities

  3. A new augmentation based algorithm for extracting maximal chordal subgraphs

    SciTech Connect

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

A graph is chordal if every cycle of length greater than three contains an edge between two non-adjacent vertices of the cycle (a chord). Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
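The augmentation idea can be sketched in a few lines of pure Python. This is an illustrative sequential sketch, not the paper's parallel algorithm: the chordality test is the standard maximum-cardinality-search routine (a graph is chordal iff the reverse MCS order is a perfect elimination ordering), and the augmentation loop simply keeps an edge only if the subgraph stays chordal.

```python
def is_chordal(adj):
    """Chordality test: run maximum cardinality search (MCS), then check
    that the reverse visit order is a perfect elimination ordering."""
    weight = {v: 0 for v in adj}
    numbered, order = set(), []
    for _ in range(len(adj)):
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        numbered.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    order.reverse()
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        later = [u for u in adj[v] if pos[u] > i]
        if later:
            w = min(later, key=lambda u: pos[u])
            # every other later neighbor of v must be adjacent to w
            if any(u != w and u not in adj[w] for u in later):
                return False
    return True

def maximal_chordal_subgraph(vertices, edges):
    """Greedy augmentation: start from the (chordal) empty subgraph and
    keep each candidate edge only if chordality is preserved."""
    adj = {v: set() for v in vertices}
    kept = []
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
        if is_chordal(adj):
            kept.append((u, v))
        else:
            adj[u].discard(v); adj[v].discard(u)
    return kept

# A 4-cycle is not chordal; a maximal chordal subgraph drops one edge.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(maximal_chordal_subgraph(range(4), c4)))  # → 3
```

The quadratic re-testing after every edge is what the paper's augmentation scheme avoids; the sketch only shows the invariant being maintained.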

  4. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

A graph is chordal if every cycle of length greater than three contains an edge between two non-adjacent vertices of the cycle (a chord). Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  5. Complex and Reoperative Colorectal Surgery: Setting Expectations and Learning from Experience.

    PubMed

    Kin, Cindy

    2016-06-01

A range of topics is covered in this issue dedicated to complex and reoperative colorectal surgery, from radiation-induced surgical problems to enterocutaneous fistulas and locally advanced or recurrent rectal cancer. Common themes include the importance of operative planning and of patient counseling on the expected functional outcomes. Experts in the field offer their technical tips and clinical lessons to maximize outcomes and minimize complications in these challenging cases.

  6. Maximizing the detection of near-Earth objects

    NASA Astrophysics Data System (ADS)

    Albin, T.; Albrecht, S.; Koschny, D.; Drolshagen, G.

    2014-07-01

Planetary bodies with a perihelion equal to or less than 1.3 astronomical units (au) are called near-Earth objects (NEOs). These objects are divided into four sub-families, two of which cross Earth's orbit and may be a potential hazard for the planet. The Tunguska event and the incident in Chelyabinsk last year have shown the devastating destructiveness of NEOs with sizes of only approximately 40 and 20 meters, respectively. To predict and identify further threats, telescopic NEO surveys currently extend our knowledge of the population of these objects. Today (March 2014) approximately 10,700 NEOs are known. Based on an extrapolation of the current population, Bottke et al. (2002) predict a total number of N≈(1.0±0.5)×10^{8} NEOs up to an absolute magnitude of H = 30.5 mag. Additionally, Bottke et al. (2002) computed a de-biased model of the expected orbital element distribution of the NEOs. They investigated the theoretical distribution of NEOs by a dynamical simulation, following the orbital evolution of these objects from several source regions. Based on both models we performed simulations of the detectability of the theoretical NEO population for telescopes with given properties. The goal of these simulations is to optimize the search strategies of NEO surveys. Our simulation models the optical telescope attributes (main and secondary mirror size, optical throughput, field of view), the electronics (CCD camera, pixel size, quantum efficiency, gain, exposure time, pixel binning, dark/bias noise, signal-to-noise ratio), atmospheric effects (seeing, sky background illumination), and the brightness and angular velocity of the NEOs. We present exemplary results for two telescopes currently being developed by the European Space Agency for a future NEO survey: the so-called Fly-Eye Telescope, a 1-m effective aperture telescope with a field of view of 6.5×6.5 deg^2, and the Test-Bed Telescope, with an aperture of 56 cm and a field of view of 2.2×2.2 deg^2.
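The signal-to-noise modeling such a detectability simulation rests on can be illustrated with the standard CCD SNR equation, SNR = S / sqrt(S + n_pix(B + D + R²)). The sketch below is illustrative only; the parameter values are made up and do not describe the Fly-Eye or Test-Bed telescopes.

```python
import math

def ccd_snr(source_rate, sky_rate, dark_rate, read_noise, n_pix, t_exp):
    """Standard CCD signal-to-noise estimate.
    source_rate, sky_rate, dark_rate: electrons/s (sky and dark per pixel);
    read_noise: electrons RMS per pixel; n_pix: pixels in the aperture."""
    signal = source_rate * t_exp
    variance = signal + n_pix * ((sky_rate + dark_rate) * t_exp + read_noise ** 2)
    return signal / math.sqrt(variance)

# In the photon-limited case (no sky, dark, or read noise), SNR = sqrt(signal).
print(round(ccd_snr(100.0, 0.0, 0.0, 0.0, 9, 1.0), 2))  # → 10.0
```

For a fixed source, SNR grows roughly as the square root of the exposure time, which is why exposure time and pixel binning enter the survey optimization.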

  7. How expectation works: psychologic and physiologic pathways.

    PubMed

    Brown, Walter A

    2015-05-01

    Although expectation has been the most widely studied of the mechanisms that drive the placebo effect, we still don't know how it works. We don't know how the thought that one will respond to a substance in a certain way is converted to symptom relief, intoxication, or airway resistance; the pathway between expectation of a response and the response itself remains uncharted. Nonetheless, in the last decade, brain-imaging studies have begun to uncover this pathway. This paper reviews both long-standing psychologic concepts about the underpinnings of expectation and some of the contemporary brain imaging research, which shows that when expectation alleviates depression, produces pain relief or improves parkinsonian symptoms, these effects come with relevant changes in brain activity and chemistry. These findings oblige us to reevaluate some of the traditional common sense notions of how expectation brings about its effects and how placebos work.

  8. Design and manufacturing rules for maximizing the performance of polycrystalline piezoelectric bending actuators

    NASA Astrophysics Data System (ADS)

    Jafferis, Noah T.; Smith, Michael J.; Wood, Robert J.

    2015-06-01

    Increasing the energy and power density of piezoelectric actuators is very important for any weight-sensitive application, and is especially crucial for enabling autonomy in micro/milli-scale robots and devices utilizing this technology. This is achieved by maximizing the mechanical flexural strength and electrical dielectric strength through the use of laser-induced melting or polishing, insulating edge coating, and crack-arresting features, combined with features for rigid ground attachments to maximize force output. Manufacturing techniques have also been developed to enable mass customization, in which sheets of material are pre-stacked to form a laminate from which nearly arbitrary planar actuator designs can be fabricated using only laser cutting. These techniques have led to a 70% increase in energy density and an increase in mean lifetime of at least 15× compared to prior manufacturing methods. In addition, measurements have revealed a doubling of the piezoelectric coefficient when operating at the high fields necessary to achieve maximal energy densities, along with an increase in the Young’s modulus at the high compressive strains encountered—these two effects help to explain the higher performance of our actuators as compared to that predicted by linear models.

  9. How fast-growing bacteria robustly tune their ribosome concentration to approximate growth-rate maximization.

    PubMed

    Bosdriesz, Evert; Molenaar, Douwe; Teusink, Bas; Bruggeman, Frank J

    2015-05-01

Maximization of growth rate is an important fitness strategy for bacteria. Bacteria can achieve this by expressing proteins at optimal concentrations, such that resources are not wasted. This is exemplified for Escherichia coli by the increase of its ribosomal protein fraction with growth rate, which precisely matches the increased protein synthesis demand. These findings and others have led to the hypothesis that E. coli aims to maximize its growth rate in environments that support growth. However, what kind of regulatory strategy is required for a robust, optimal adjustment of the ribosome concentration to the prevailing condition is still an open question. In the present study, we analyze the ppGpp-controlled mechanism of ribosome expression used by E. coli and show that this mechanism keeps the ribosomes saturated with their substrates. In this manner, overexpression of the highly abundant ribosomal proteins is prevented, and limited resources can be redirected to the synthesis of other growth-promoting enzymes. It turns out that the kinetic conditions for robust, optimal protein partitioning, which are required for growth rate maximization across conditions, can be achieved with basic biochemical interactions. We show that inactive ribosomes are the most suitable 'signal' for tracking the intracellular nutritional state and for adjusting gene expression accordingly, as small deviations from the optimal ribosome concentration cause a huge fractional change in ribosome inactivity. We expect to find this control logic implemented across fast-growing microbial species because growth rate maximization is a common selective pressure, ribosomes are typically highly abundant and thus costly, and the required control can be implemented by a small, simple network.

  10. Utilization of the terrestrial cyanobacterial sheet

    NASA Astrophysics Data System (ADS)

    Katoh, Hiroshi; Tomita-Yokotani, Kaori; Furukawa, Jun; Kimura, Shunta; Yamaguchi, Yuji; Takenaka, Hiroyuki; Kohno, Nobuyuki

    2016-07-01

The terrestrial nitrogen-fixing cyanobacterium Nostoc commune lives in habitats ranging from polar regions to deserts. N. commune forms visible colonies held together by extracellular polymeric substances. Because of its extracellular polysaccharide, desiccation tolerance, and nitrogen fixation, N. commune is expected to be useful for agriculture, food production, and terraforming. To demonstrate these potential abilities, we fabricated N. commune sheets for convenient use and evaluated them by plant growth and the accumulation of radioactive compounds. We will discuss the utilization of terrestrial cyanobacteria in closed environments.

  11. Moving multiple sinks through wireless sensor networks for lifetime maximization.

    SciTech Connect

    Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano; Phillips, Cynthia Ann

    2008-01-01

Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules achieving network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations and ordering them…
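The p-median subproblem in the separation step is easy to state: choose p facility sites so that the total distance from every client to its nearest chosen site is minimized. For tiny instances it can even be brute-forced, as in this illustrative sketch (the distance matrix is made up; real instances need the integer programming technology the authors mention).

```python
from itertools import combinations

def p_median(dist, p):
    """Brute-force p-median: dist[i][j] is the distance from client i to
    candidate site j; returns (total cost, chosen sites)."""
    n_sites = len(dist[0])
    best_cost, best_sites = float("inf"), None
    for sites in combinations(range(n_sites), p):
        cost = sum(min(row[j] for j in sites) for row in dist)
        if cost < best_cost:
            best_cost, best_sites = cost, sites
    return best_cost, best_sites

# Three clients, three candidate sink sites; with p=1 the middle site wins.
dist = [[0, 4, 9],
        [4, 0, 5],
        [9, 5, 0]]
print(p_median(dist, 1))  # → (9, (1,))
```

Enumeration is exponential in p, which is exactly why the paper resorts to an integer programming solver for large instances.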

  12. Expectation mismatch: differences between self-generated and cue-induced expectations.

    PubMed

    Gaschler, R; Schwager, S; Umbach, V J; Frensch, P A; Schubert, T

    2014-10-01

    Expectation of upcoming stimuli and tasks can lead to improved performance, if the anticipated situation occurs, while expectation mismatch can lead to less efficient processing. Researchers have used methodological approaches that rely on either self-generated expectations (predictions) or cue-induced expectations to investigate expectation mismatch effects. Differentiating these two types of expectations for different contents of expectation such as stimuli, responses, task sets and conflict level, we review evidence suggesting that self-generated expectations lead to larger facilitating effects and conflict effects on the behavioral and neural level - as compared to cue-based expectations. On a methodological level, we suggest that self-generated as compared to cue-induced expectations allow for a higher amount of experimental control in many experimental designs on expectation effects. On a theoretical level, we argue for qualitative differences in how cues vs. self-generated expectations influence performance. While self-generated expectations might generally involve representing the expected event in the focus of attention in working memory, cues might only lead to such representations under supportive circumstances (i.e., cue of high validity and attended).

  13. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
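The general flavor of pairing Euclidean metrics with Monte Carlo simulation can be sketched as follows. This nearest-neighbor example illustrates the approach only; it is not the authors' algorithm, and the point data and box dimensions are hypothetical.

```python
import math
import random

def nn_distances(points):
    """Euclidean nearest-neighbor distance for each 3D point."""
    return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

def mc_baseline(n_points, box, trials=200, seed=0):
    """Monte Carlo baseline: mean nearest-neighbor distance for n_points
    placed uniformly at random in a box of dimensions (wx, wy, wz)."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        pts = [tuple(rng.uniform(0, w) for w in box) for _ in range(n_points)]
        d = nn_distances(pts)
        means.append(sum(d) / len(d))
    return sum(means) / len(means)
```

Comparing the observed mean nearest-neighbor distance of imaged cells against this random baseline indicates clustering (observed below baseline) or dispersion (observed above baseline).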

  14. Expectations predict chronic pain treatment outcomes.

    PubMed

    Cormier, Stéphanie; Lavigne, Geneviève L; Choinière, Manon; Rainville, Pierre

    2016-02-01

    Accumulating evidence suggests an association between patient pretreatment expectations and numerous health outcomes. However, it remains unclear if and how expectations relate to outcomes after treatments in multidisciplinary pain programs. The present study aims at investigating the predictive association between expectations and clinical outcomes in a large database of chronic pain patients. In this observational cohort study, participants were 2272 patients treated in one of 3 university-affiliated multidisciplinary pain treatment centers. All patients received personalized care, including medical, psychological, and/or physical interventions. Patient expectations regarding pain relief and improvements in quality of life and functioning were measured before the first visit to the pain centers and served as predictor variables. Changes in pain intensity, depressive symptoms, pain interference, and tendency to catastrophize, as well as satisfaction with pain treatment and global impressions of change at 6-month follow-up, were considered as treatment outcomes. Structural equation modeling analyses showed significant positive relationships between expectations and most clinical outcomes, and this association was largely mediated by patients' global impressions of change. Similar patterns of relationships between variables were also observed in various subgroups of patients based on sex, age, pain duration, and pain classification. Such results emphasize the relevance of patient expectations as a determinant of outcomes in multimodal pain treatment programs. Furthermore, the results suggest that superior clinical outcomes are observed in individuals who expect high positive outcomes as a result of treatment.

  15. Expected and Experienced Pain Levels in Electromyography

    PubMed Central

    YALINAY DİKMEN, Pınar; ILGAZ AYDINLAR, Elif; KARLIKAYA, Geysu

    2013-01-01

Introduction: The aim of the present study was to assess pain using a visual analogue scale (VAS) in patients awaiting an EMG procedure (i.e., expected VAS) and after an EMG procedure (i.e., experienced VAS). Methods: Expected and experienced pain in response to nerve conduction studies (NCS) and needle EMG were assessed in 108 patients (61 females, 47 males; mean age 43.2±11.6) using a VAS. Results: No significant correlations were noted between the expected or the experienced VAS in response to EMG and demographic features of the patients. The expected VAS was significantly higher than the experienced VAS in response to needle EMG (p=0.005). The highest VAS level was noted in the expected VAS in response to needle EMG (4.7±2.2). The lowest VAS level was noted in the experienced VAS in response to NCS (3.6±2.5). Conclusion: The present study demonstrated that neither the expected nor the experienced pain associated with EMG exceeded a moderate level. Interestingly, we found that expected pain levels in response to needle EMG were significantly higher than experienced pain levels. Therefore, it may be possible to increase compliance if patients are provided with this information before undergoing electrophysiological procedures.

  16. Solar energy research and utilization

    NASA Technical Reports Server (NTRS)

    Cherry, W. R.

    1974-01-01

    The role of solar energy is visualized in the heating and cooling of buildings, in the production of renewable gaseous, liquid and solid fuels, and in the production of electric power over the next 45 years. Potential impacts of solar energy on various energy markets, and estimated costs of such solar energy systems are discussed. Some typical solar energy utilization processes are described in detail. It is expected that at least 20% of the U.S. total energy requirements by 2020 will be delivered from solar energy.

  17. Orbiter electrical equipment utilization baseline

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The baseline for utilization of Orbiter electrical equipment in both electrical and Environmental Control and Life Support System (ECLSS) thermal analyses is established. It is a composite catalog of Space Shuttle equipment, as defined in the Shuttle Operational Data Book. The major functions and expected usage of each component type are described. Functional descriptions are designed to provide a fundamental understanding of the Orbiter electrical equipment, to insure correlation of equipment usage within nominal analyses, and to aid analysts in the formulation of off-nominal, contingency analyses.

  18. Middleware for Pervasive Spaces: Balancing Privacy and Utility

    NASA Astrophysics Data System (ADS)

    Massaguer, Daniel; Hore, Bijit; Diallo, Mamadou H.; Mehrotra, Sharad; Venkatasubramanian, Nalini

    Middleware for pervasive spaces has to meet conflicting requirements. It has to both maximize the utility of the information exposed and ensure that this information does not violate users' privacy. In order to resolve these conflicts, we propose a framework grounded in utility theory where users dynamically control the level of disclosure about their information. We begin by providing appropriate definitions of privacy and utility for the type of applications that would support collaborative work in an office environment—current definitions of privacy and anonymity do not apply in this context. We propose a distributed solution that, given a user's background knowledge, maximizes the utility of the information being disclosed to information recipients while meeting the privacy requirements of users. We implement our solution in the context of a real pervasive space middleware and provide experiments that demonstrate its behaviour.

  19. A utility oriented radio resource management algorithm for heterogeneous network

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyan; Dong, Yan; Huang, Zailu

    2007-11-01

A utility-oriented radio resource management algorithm is proposed for a broadband non-geostationary satellite network that operates in a heterogeneous network environment and provides access services to various customers on the ground. Based on game theory, the algorithm recasts the problem of optimizing the network's performance as the problem of maximizing the network's long-term utility. When access service requirements change, the algorithm evaluates the current traffic conditions and QoS dimensions of the network, assesses the influence of the new service requirement on the long-term utility of the satellite network, and then makes the resource assignment decision according to the rule of maximizing that long-term utility. This game-theoretic process guarantees that both the benefit of the network and the requirements of the customers are considered jointly. The simulation results demonstrate the effectiveness of the proposed algorithm.

  20. Utilization of the terrestrial cyanobacteria

    NASA Astrophysics Data System (ADS)

    Katoh, Hiroshi; Tomita-Yokotani, Kaori; Furukawa, Jun; Kimura, Shunta; Yokoshima, Mika; Yamaguchi, Yuji; Takenaka, Hiroyuki

The terrestrial, N_{2}-fixing cyanobacterium Nostoc commune is expected to be useful for agriculture, food production, and terraforming because of its extracellular polysaccharide, desiccation tolerance, and nitrogen fixation. Previously, the first author analyzed desiccation-related genes and suggested that these genes are related to nitrogen fixation and metabolism. In this report, we suggest the possibilities for agriculture using this cyanobacterium. We also found that N. commune accumulated radioactive compounds in Fukushima, Japan after the nuclear accident. We therefore investigated decontaminating radioactive compounds from the surface soil using the cyanobacterium and showed that it accumulates radioactive compounds. We will discuss the utilization of terrestrial cyanobacteria in closed environments. Keywords: desiccation, terrestrial cyanobacteria, bioremediation, agriculture

  1. Great Expectations: Brigadier General Haywood S. Hansell, Jr.and the XXI Bomber Command

    DTIC Science & Technology

    2015-07-01

formation flying. Hansell led the first mission, but his aircraft became one of four to abort and never reached the target. Shortly thereafter, he...Seventeen aircraft aborted for fuel problems and six could not release their munitions for mechanical failures, but 88 aircraft placed bombs on...was the only wing available to Hansell in the fall of 1944 and maximizing its utilization became imperative. The number of aborts and losses began

  2. Utilizing Statistical Inference to Guide Expectations and Test Structuring During Operational Testing and Evaluation

    DTIC Science & Technology

    2011-04-30

...are in the area of the design and acquisition of military assets. Specific domains of interest include the concept of value and its integration...inference may point to areas where the test may be modified or additional control measures may be introduced to increase the likelihood of obtaining

  3. Understanding the Hows and Whys of Decision-Making: From Expected Utility to Divisive Normalization.

    PubMed

    Glimcher, Paul

    2014-01-01

    Over the course of the last century, economists and ethologists have built detailed models from first principles of how humans and animals should make decisions. Over the course of the last few decades, psychologists and behavioral economists have gathered a wealth of data at variance with the predictions of these economic models. This has led to the development of highly descriptive models that can often predict what choices people or animals will make but without offering any insight into why people make the choices that they do--especially when those choices reduce a decision-maker's well-being. Over the course of the last two decades, neurobiologists working with economists and psychologists have begun to use our growing understanding of how the nervous system works to develop new models of how the nervous system makes decisions. The result, a growing revolution at the interdisciplinary border of neuroscience, psychology, and economics, is a new field called Neuroeconomics. Emerging neuroeconomic models stand to revolutionize our understanding of human and animal choice behavior by combining fundamental properties of neurobiological representation with decision-theoretic analyses. In this overview, one class of these models, based on the widely observed neural computation known as divisive normalization, is presented in detail. The work demonstrates not only that a discrete class of computation widely observed in the nervous system is fundamentally ubiquitous, but how that computation shapes behaviors ranging from visual perception to financial decision-making. It also offers the hope of reconciling economic analysis of what choices we should make with psychological observations of the choices we actually do make.
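The divisive normalization computation discussed in this overview has a widely used canonical form: each option's represented value is its raw value divided by the summed activity of the whole choice set plus a semisaturation constant, v_i / (sigma + sum_j v_j). A minimal sketch (the sigma value and inputs are illustrative):

```python
def divisive_normalization(values, sigma=1.0):
    """Rescale each value by the pooled activity of the whole set, so the
    represented value of an option depends on its context."""
    denom = sigma + sum(values)
    return [v / denom for v in values]

# Adding a third option changes the normalized values of the first two,
# even though their raw values are unchanged.
print(divisive_normalization([4.0, 2.0]))       # ≈ [0.571, 0.286]
print(divisive_normalization([4.0, 2.0, 2.0]))  # ≈ [0.444, 0.222, 0.222]
```

This context dependence is exactly what lets such models account for choice behaviors that fixed-utility models cannot.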

  4. Forecasting Complex Political and Military Events: The Application of Expected Utility to Crisis Situations

    DTIC Science & Technology

    2000-06-01

    useful to distinguish between an actor’s resources and veto power. When veto players are present, the median voter forecast is not necessarily the...predicted outcome. In the presence of veto players, the analysis requires that they all be at, or very near, the same position at the end of the analysis for...there to be an agreed upon outcome. Such predictions will hold even if the majority objects. Veto players are the only stakeholders who actually

  5. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.

    PubMed

    Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence, at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspective paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher-quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming.

  6. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift

    PubMed Central

    Fleming, Theresa M.; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M.; Aschieri, Filippo; Bavin, Lynda M.; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence, at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspective paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher-quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094

  7. Bioengineering and Coordination of Regulatory Networks and Intracellular Complexes to Maximize Hydrogen Production by Phototrophic Microorganisms

    SciTech Connect

    Tabita, F. Robert

    2013-07-30

    In this study, the Principal Investigator, F.R. Tabita has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved and this will allow for the potential utilization of waste materials as a feedstock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.

  8. What to Expect After Heart Surgery

    MedlinePlus

    What To Expect After Heart Surgery. Recovery in the Hospital: You may spend a day ... heart rate, blood pressure, breathing, and incision site(s). Recovery at Home: People respond differently to heart surgery. ...

  9. What to Expect After Pulmonary Rehabilitation

    MedlinePlus

    ... answer questions. Some of these tests, such as exercise tests, will be the same ones you had ...

  10. What to Expect Before Pulmonary Rehabilitation

    MedlinePlus

    ... how well you're able to breathe and exercise. You'll have lung function tests to check ...

  11. Parental outcome expectations on children's TV viewing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Children's TV viewing has been associated with increased sedentary behavior and poor eating habits. Positive intervention effects have been observed when addressing outcome expectations as a mediator in interventions targeting children's dietary behavior. Little is known about parental outcome expec...

  12. Classics in the Classroom: Great Expectations Fulfilled.

    ERIC Educational Resources Information Center

    Pearl, Shela

    1986-01-01

    Describes how an English teacher in a Queens, New York, ghetto school introduced her grade nine students to Charles Dickens's "Great Expectations." Focuses on students' responses, which eventually became enthusiastic, and discusses the use of classics within the curriculum. (KH)

  13. Chemical structure elucidation from ¹³C NMR chemical shifts: efficient data processing using bipartite matching and maximal clique algorithms.

    PubMed

    Koichi, Shungo; Arisaka, Masaki; Koshino, Hiroyuki; Aoki, Atsushi; Iwata, Satoru; Uno, Takeaki; Satoh, Hiroko

    2014-04-28

    Computer-assisted chemical structure elucidation has been intensively studied since the first use of computers in chemistry in the 1960s. Most of the existing elucidators use a structure-spectrum database to obtain clues about the correct structure. Such a structure-spectrum database is expected to grow on a daily basis. Hence, the necessity to develop an efficient structure elucidation system that can adapt to the growth of a database has been also growing. Therefore, we have developed a new elucidator using practically efficient graph algorithms, including the convex bipartite matching, weighted bipartite matching, and Bron-Kerbosch maximal clique algorithms. The utilization of the two matching algorithms especially is a novel point of our elucidator. Because of these sophisticated algorithms, the elucidator exactly produces a correct structure if all of the fragments are included in the database. Even if not all of the fragments are in the database, the elucidator proposes relevant substructures that can help chemists to identify the actual chemical structures. The elucidator, called the CAST/CNMR Structure Elucidator, plays a complementary role to the CAST/CNMR Chemical Shift Predictor, and together these two functions can be used to analyze the structures of organic compounds.
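
    Of the three graph algorithms named, the Bron-Kerbosch maximal clique routine is compact enough to sketch. The adjacency map below is a toy graph for illustration, not a fragment-compatibility graph from the CAST/CNMR system.

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate maximal cliques: r = current clique, p = candidates,
    x = already-processed vertices (prevents duplicate cliques)."""
    if not p and not x:
        cliques.append(set(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# Toy graph: a triangle {1, 2, 3} and a disjoint edge {4, 5}.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
```

    In an elucidator, each maximal clique would correspond to a maximal consistent set of candidate fragment assignments.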

  14. Domain Parking: Not as Malicious as Expected

    DTIC Science & Technology

    2014-08-01

    Domain Parking: Not as Malicious as Expected Leigh Metcalf, Jonathan Spring netsa-contact@cert.org CERT® Coordination Center, Software Engineering...Parking: Not as Malicious as Expected 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Metcalf /Jonathan Spring Leigh 5d...Tech. Rep. RFC 3927, May 2005. [11] L. B. Metcalf and J. M. Spring, “Everything you wanted to know about blacklists but were afraid to ask

  15. Converting customer expectations into achievable results.

    PubMed

    Landis, G A

    1999-11-01

    It is not enough in today's environment to just meet customers' expectations--we must exceed them. Therefore, one must learn what constitutes expectations. These needs have expanded during the past few years from just manufacturing the product and looking at the outcome from a provincial standpoint. Now we must understand and satisfy the entire supply chain. To manage this process and satisfy the customer, the process now involves the supplier, the manufacturer, and the entire distribution system.

  16. Disk Density Tuning of a Maximal Random Packing

    PubMed Central

    Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.

    2016-01-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
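
    The two local properties the framework maintains (minimum separation between disk centers and maximum domain coverage distance) can be checked directly for a uniform-radius packing. The coordinates and radii below are made up for illustration; this sketches only the verification side, not the relocate/inject/eject operations themselves.

```python
import itertools
import math

def conflict_free(centers, r_min):
    """No two disk centers are closer than the minimum separation distance."""
    return all(math.dist(a, b) >= r_min
               for a, b in itertools.combinations(centers, 2))

def covers(centers, samples, r_cov):
    """Every sample point lies within the coverage distance of some center."""
    return all(min(math.dist(p, c) for c in centers) <= r_cov
               for p in samples)

centers = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
```

    Because both properties are local, a tuning operation only needs to re-check disks near the one it moved, added, or removed.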

  17. A multistate analysis of active life expectancy.

    PubMed

    Rogers, A; Rogers, R G; Branch, L G

    1989-01-01

    With today's lower mortality rates, longer expectations of life, and new medical technologies, the nation's health policy focus has shifted from emphasis on individual survival to emphasis on personal health and independent living. Using longitudinal data sets and new methodological techniques, researchers have begun to assess active life expectancies, estimating not only how long a subpopulation can expect to live beyond each age, but what fractions of the expected remaining lifetime will be lived as independent, dependent, or institutionalized. New ideas are addressed, applying recently developed multistate life table methods to Waves One and Two of the Massachusetts Health Care Panel Study. Expectations of active life are presented for those 65 and older who initially are in one of two functional states of well-being. Included are expectations of life for those, for example, who were independent and remained so, or who were dependent and became independent. Although public health officials are concerned about the number of elderly who cease being independent, preliminary analysis shows that a significant number of the dependent elderly regain their independence, a situation which needs to be addressed in health care planning.
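
    The multistate idea can be made concrete with a toy discrete-time chain over independent, dependent, and dead states. The transition probabilities below are invented for illustration (note the nonzero dependent-to-independent entry, the recovery path the study highlights); they are not estimates from the Massachusetts panel.

```python
# States: 0 = independent, 1 = dependent, 2 = dead (absorbing).
P = [[0.90, 0.07, 0.03],
     [0.20, 0.70, 0.10],
     [0.00, 0.00, 1.00]]

def expected_years(start, P, horizon=50):
    """Expected years spent [independent, dependent] over the horizon,
    stepping the state distribution one year at a time."""
    dist = [1.0 if i == start else 0.0 for i in range(3)]
    totals = [0.0, 0.0]
    for _ in range(horizon):
        totals[0] += dist[0]
        totals[1] += dist[1]
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return totals
```

    A multistate life table refines this sketch with age-specific transition rates, so the expectations vary by the age and initial state of the cohort.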

  18. Consumer expectations for future gasoline prices

    SciTech Connect

    Not Available

    1983-05-01

    A key finding of this study is that consumer expectations respond to the volatility in retail gasoline prices. Historical price trends can be characterized as a series of rapid steps separated by varying periods of relative stability. The behavioral models investigated here imply that consumer response to downward steps is constrained by the effect of the longer-term (less volatile) trends. The response to upward steps is, however, essentially unconstrained in that it can be driven directly through the short-term price trend. The rough trajectory by which retail gasoline prices have climbed historically thus appears to sustain relatively high consumer expectations for future price levels. In a second point of divergence, analytical efforts also generally employ real price trends. While arguably the basis for decision-making by business planners and analysts, it does not necessarily hold that consumers discount nominal price trends for expected inflation rates in developing their year-ahead price forecasts. An initial review of the SRC data for expected real price increases (expected prices adjusted for the expected inflation rate) identified no consistent relationships, and generally lower correlations, with historical trends in real prices.

  19. Expectation and attention in hierarchical auditory prediction.

    PubMed

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  20. Maximally symmetric stabilizer MUBs in even prime-power dimensions

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2017-03-01

    One way to construct a maximal set of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space is by means of finite phase-space methods. MUBs obtained in this way are covariant with respect to some subgroup of the group of all affine symplectic phase-space transformations. However, this construction is not canonical: as a consequence, many different choices of covariance subgroups are possible. In particular, when the Hilbert space is 2n dimensional, it is known that covariance with respect to the full group of affine symplectic phase-space transformations can never be achieved. Here we show that in this case there exist two essentially different choices of maximal subgroups admitting covariant MUBs. For both of them, we explicitly construct a family of 2n covariant MUBs. We thus prove that, contrary to the odd dimensional case, maximally covariant MUBs are very far from being unique in even prime-power dimensions.
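
    The defining property behind these constructions is easy to check numerically: two orthonormal bases of a d-dimensional space are mutually unbiased when every cross overlap satisfies |<e|f>|^2 = 1/d. The sketch below verifies the simplest case, d = 2, with the computational and Hadamard bases; it illustrates the definition only, not the covariant phase-space construction.

```python
import math

def is_mutually_unbiased(basis_a, basis_b, tol=1e-9):
    """Check |<e_i|f_j>|^2 = 1/d for every pair of basis vectors."""
    d = len(basis_a)
    for e in basis_a:
        for f in basis_b:
            overlap = abs(sum(x.conjugate() * y for x, y in zip(e, f))) ** 2
            if abs(overlap - 1.0 / d) > tol:
                return False
    return True

z_basis = [(1, 0), (0, 1)]
x_basis = [(1 / math.sqrt(2), 1 / math.sqrt(2)),
           (1 / math.sqrt(2), -1 / math.sqrt(2))]
```

    A maximal set in dimension d (when it exists) contains d + 1 such pairwise unbiased bases.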

  1. Evaluation of maximal respiratory pressures in myasthenia gravis. Prognostic value.

    PubMed

    Muñoz Fernández, Carmen; Díez Tejedor, Exuperio; Frank Garcia, Ana; Pino, Jose María; Pérez Conde, Concepción; Barreiro Tella, Pablo

    2004-01-01

    We assess the prognosis of mild forms of myasthenia gravis (MG) by maximal respiratory pressures (MRP) and single fiber electromyography (SFEMG). Fifty MG patients (12 form I, 21 form IIa and 17 form IIb) are assessed by MRP [maximal expiratory pressure (MEP) and maximal inspiratory pressure (MIP)] and SFEMG, and are followed up clinically. In form I patients developing form IIa and in form IIa patients worsening to form IIb, we found MEP and MIP mean relative values significantly lower than in the rest. Conversely, form IIb patients improving to form IIa display MIP mean relative values higher than the rest; no difference appears with MEP. A reduction below 50% of the fifth percentile implies clinical deterioration in forms I and IIa, while surpassing it in form IIb suggests a tendency to improvement. No evident differences are found by SFEMG. MRP allow the follow-up of MG patients and could warn us of the clinical prognosis.

  2. Viscosity and density dependence during maximal flow in man.

    PubMed

    Staats, B A; Wilson, T A; Lai-Fook, S J; Rodarte, J R; Hyatt, R E

    1980-02-01

    Maximal expiratory flow curves were obtained from ten healthy subjects while breathing air and three other gas mixtures with different densities and viscosities. From these data, the magnitudes of the dependence of maximal flow on gas density and viscosity were obtained. The scaling laws of fluid mechanics, together with a model for the flow-limiting mechanism, were used to obtain a prediction of the relationship between the density dependence and the viscosity dependence of maximal flow. Although the data for individual subjects were too variable to allow a precise comparison with this prediction, the relationship between the mean density dependence and the mean viscosity dependence of all subjects agreed with the theoretic prediction. This agreement supports the frequently made assumption that flow resistance, rather than tissue viscoelasticity, is the dominant contributor to peripheral resistance. Information on the relationships between the pressure drop to the flow-limiting segment and flow, gas density and viscosity, and lung volume was also obtained.

  3. Matching and maximizing with variable-time schedules.

    PubMed Central

    DeCarlo, L T

    1985-01-01

    Pigeons were offered choices between a variable-time schedule that arranged reinforcers throughout the session and a variable-time schedule that arranged reinforcers only when the pigeon was spending time on it. The subjects could maximize the overall rate of reinforcement in this situation by biasing their time allocation towards the latter schedule. This arrangement provides an alternative to concurrent variable-interval variable-ratio schedules for testing whether animals maximize overall rates or match relative rates, and has the advantage of being free of the asymmetrical response requirements present with those schedules. The results were contrary to those predicted by maximizing: The bias it predicts did not appear. PMID:3981085
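
    The competing predictions can be stated concretely. Under strict matching, relative time allocation equals relative obtained reinforcement rate, whereas overall-rate maximization in this arrangement predicts a bias toward the schedule that pays off only while the subject is on it. The rates below are illustrative.

```python
def matching_allocation(r1, r2):
    """Strict matching: relative time on each alternative equals the
    relative rate of reinforcement obtained from it."""
    total = r1 + r2
    return r1 / total, r2 / total

# With equal obtained rates, matching predicts indifference (0.5 / 0.5);
# maximizing instead predicts extra time on the contingent schedule.
t1, t2 = matching_allocation(30.0, 30.0)
```

    The study's result, no bias beyond matching, favors the first prediction.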

  4. Influence of central obesity in estimating maximal oxygen uptake

    PubMed Central

    de Souza e Silva, Christina Grüne; Franklin, Barry A.; de Araújo, Claudio Gil Soares

    2016-01-01

    OBJECTIVE: To assess the influence of central obesity on the magnitude of the error of estimate of maximal oxygen uptake in maximal cycling exercise testing. METHOD: A total of 1,715 adults (68% men) between 18-91 years of age underwent cardiopulmonary exercise testing using a progressive protocol to volitional fatigue. Subjects were stratified by central obesity into three quartile ranges: Q1, Q2-3 and Q4. Maximal oxygen uptake [mL.(kg.min)-1] was estimated by the attained maximal workload and body weight using gender- and population-specific equations. The error of estimate [mL.(kg.min)-1] and percent error between measured and estimated maximal oxygen uptake values were compared among obesity quartile ranges. RESULTS: The error of estimate and percent error differed (mean ± SD) for men (Q1=1.3±3.7 and 2.0±10.4; Q2-3=0.5±3.1 and -0.5±13.0; and Q4=-0.3±2.8 and -4.5±15.8 (p<0.05)) and for women (Q1=1.6±3.3 and 3.6±10.2; Q2-3=0.4±2.7 and -0.4±11.8; and Q4=-0.9±2.3 and -10.0±22.7 (p<0.05)). CONCLUSION: Central obesity directly influences the magnitude of the error of estimate of maximal oxygen uptake and should be considered when direct expired gas analysis is unavailable. PMID:27982162
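
    The two comparison metrics reported, error of estimate and percent error between measured and estimated maximal oxygen uptake, are straightforward to compute. The sample values below are illustrative, not data from the study.

```python
def error_of_estimate(measured, estimated):
    """Measured minus estimated VO2max, in mL/(kg.min)."""
    return measured - estimated

def percent_error(measured, estimated):
    """Error of estimate expressed as a percentage of the measured value."""
    return 100.0 * (measured - estimated) / measured

# e.g. a subject with measured 35.0 and estimated 33.7 mL/(kg.min)
eoe = error_of_estimate(35.0, 33.7)
pct = percent_error(35.0, 33.7)
```

    A positive error means the workload-based equation underestimated the measured value; the study's point is that the sign and size of this error shift systematically with central obesity quartile.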

  5. Cardiovascular consequences of bed rest: effect on maximal oxygen uptake

    NASA Technical Reports Server (NTRS)

    Convertino, V. A.

    1997-01-01

    Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However, reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake.
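
    The decomposition the review relies on is the Fick principle: VO2 equals heart rate times stroke volume times the arteriovenous oxygen difference. The pre/post values below are illustrative numbers, not data from the reviewed studies; they reproduce the qualitative pattern described (max heart rate up, stroke volume down, a-v O2 difference unchanged, VO2max down).

```python
def vo2(hr_bpm, stroke_volume_ml, avo2_diff_ml_per_100ml):
    """Fick principle: VO2 (mL O2/min) = HR x SV x a-v O2 difference.
    The a-v difference is given per 100 mL of blood, hence the /100."""
    return hr_bpm * stroke_volume_ml * (avo2_diff_ml_per_100ml / 100.0)

pre_bed_rest = vo2(190, 110, 16)   # illustrative pre-bed-rest maximum
post_bed_rest = vo2(195, 85, 16)   # higher max HR, lower stroke volume
```

    Even with a slightly elevated maximal heart rate, the fall in stroke volume dominates, so computed VO2max drops.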

  6. Bison distribution under conflicting foraging strategies: site fidelity vs. energy maximization.

    PubMed

    Merkle, Jerod A; Cherry, Seth G; Fortin, Daniel

    2015-07-01

    Foraging strategies based on site fidelity and maximization of energy intake rate are two adaptive forces shaping animal behavior. Whereas these strategies can both be evolutionarily stable, they predict conflicting optimal behaviors when population abundance is in decline. In such a case, foragers employing an energy-maximizing strategy should reduce their use of low-quality patches as interference competition becomes less intense for high-quality patches. Foragers using a site fidelity strategy, however, should continue to use familiar patches. Because natural fluctuations in population abundance provide the only non-manipulative opportunity to evaluate adaptation to these evolutionary forces, few studies have examined these foraging strategies simultaneously. Using abundance and space use data from a free-ranging bison (Bison bison) population living in a meadow-forest matrix in Prince Albert National Park, Canada, we determined how individuals balance the trade-off between site fidelity and energy-maximizing patch choice strategies with respect to changes in population abundance. From 1996 to 2005, bison abundance increased from 225 to 475 and then decreased to 225 by 2013. During the period of population increase, population range size increased. This expansion involved the addition of relatively less profitable areas and patches, leading to a decrease in the mean expected profitability of the range. Yet, during the period of population decline, we detected neither a subsequent retraction in population range size nor an increase in mean expected profitability of the range. Further, patch selection models during the population decline indicated that, as density decreased, bison portrayed stronger fidelity to previously visited meadows, but no increase in selection strength for profitable meadows. Our analysis reveals that an energy-maximizing patch choice strategy alone cannot explain the distribution of individuals and populations, and site fidelity is an

  7. Expectation and surprise determine neural population responses in the ventral visual stream.

    PubMed

    Egner, Tobias; Monti, Jim M; Summerfield, Christopher

    2010-12-08

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, "predictive coding" models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction ("face expectation") and prediction error ("face surprise"), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects' perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se.
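
    The predictive-coding account of the FFA result can be caricatured in a few lines: population activity is modeled as the sum of a representational term (face expectation) and an error term (positive face surprise). The functional forms below are a deliberately minimal assumption for illustration, not the model fitted in the paper.

```python
def ffa_response(stimulus_is_face, p_face):
    """Sum of a prediction unit (face expectation) and an error unit
    (positive face surprise: face evidence minus expectation)."""
    evidence = 1.0 if stimulus_is_face else 0.0
    prediction = p_face
    prediction_error = max(0.0, evidence - p_face)
    return prediction + prediction_error

# Face/house responses converge as face expectation rises and are
# maximally differentiated when face expectation is low.
diff_low = ffa_response(True, 0.2) - ffa_response(False, 0.2)
diff_high = ffa_response(True, 0.9) - ffa_response(False, 0.9)
```

    A pure feature detector, by contrast, would predict the same face/house difference at every expectation level, which is the interaction the fMRI data rule out.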

  8. Classification of maximally supersymmetric backgrounds in supergravity theories

    NASA Astrophysics Data System (ADS)

    Louis, Jan; Lüst, Severin

    2017-02-01

    We study maximally supersymmetric solutions of all gauged or deformed supergravity theories in D ≥ 3 space-time dimensions. For vanishing background fluxes the space-time background has to be either Minkowski or anti-de Sitter. We derive a simple criterion for the existence of solutions with non-trivial fluxes and determine all supergravities that satisfy it. We show that their solutions coincide with those of the corresponding ungauged theories and conclude that the known list of maximally supersymmetric solutions is exhaustive.

  9. Maximizing the Divergence from a Hierarchical Model of Quantum States

    NASA Astrophysics Data System (ADS)

    Weis, Stephan; Knauf, Andreas; Ay, Nihat; Zhao, Ming-Jing

    2015-03-01

    We study many-party correlations quantified in terms of the Umegaki relative entropy (divergence) from a Gibbs family known as a hierarchical model. We derive these quantities from the maximum-entropy principle which was used earlier to define the closely related irreducible correlation. We point out the differences between quantum states and probability vectors which exist in hierarchical models, in the divergence from a hierarchical model and in local maximizers of this divergence. The differences are, respectively, missing factorization, discontinuity and reduction of uncertainty. We discuss global maximizers of the mutual information of separable qubit states.
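
    For probability vectors, the classical counterpart of the Umegaki relative entropy is the Kullback-Leibler divergence, and the divergence from the factorized (product) model quantifies total correlation. The two-bit example below is illustrative.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats, with 0 log 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint distribution of two perfectly correlated bits vs. the product
# of its marginals (both marginals uniform).
p = [0.5, 0.0, 0.0, 0.5]
marg = [0.5, 0.5]
q = [marg[i] * marg[j] for i in range(2) for j in range(2)]
correlation = kl_divergence(p, q)
```

    For the quantum states the paper studies, the divergence from the hierarchical model behaves differently (discontinuity, missing factorization), which is precisely the contrast with this classical picture.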

  10. Remarks on the information entropy maximization method and extended thermodynamics

    NASA Astrophysics Data System (ADS)

    Eu, Byung Chan

    1998-04-01

    The information entropy maximization method was applied to heat conduction by Jou et al. [J. Phys. A 17, 2799 (1984)]. Advancing this method one more step, Nettleton [J. Chem. Phys. 106, 10311 (1997)] combined the method with a projection operator technique to derive a set of evolution equations for macroscopic variables from the Liouville equation for a simple liquid, and a claim was made that the method provides a statistical mechanical basis for the theory of irreversible processes and, in particular, of extended thermodynamics which is consistent with the laws of thermodynamics. This line of the information entropy maximization method is analyzed from the viewpoint of the laws of thermodynamics in this paper.
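
    The core of the entropy maximization method can be sketched for a finite state space: among all distributions with a prescribed mean energy, entropy is maximized by the Gibbs form p_i proportional to exp(-beta E_i), with beta fixed by the constraint. The bisection solver below is a minimal illustration under that standard setup, not the projection-operator machinery discussed in the paper.

```python
import math

def gibbs(energies, beta):
    """Entropy-maximizing distribution for a fixed mean energy."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def solve_beta(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Bisection on beta: mean energy decreases monotonically in beta."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        mean = sum(p * e for p, e in zip(gibbs(energies, mid), energies))
        if mean > target_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

    Extended thermodynamics applies the same maximization with additional constraints (e.g. the heat flux), producing flux-dependent distributions.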

  11. [The effects of teacher expectancy and self-expectancy on performance].

    PubMed

    Choi, K S

    1987-08-01

    The present study was designed to investigate the effect on performance of the relationship between teacher expectancy and self-expectancy. For the induced expectancy, a random half of 96 high school students enrolled in a four-week summer language course of a Christian association were described to the instructors as having high success potential. The remaining trainees served as controls. Correct scores on the learning task and ratings of the instructors' behavior and attitude were measured in three sessions of the course. Ratings of teacher behavior were factor-analyzed and four interpretable factors emerged: Support, Caring, Attention, and Tutoring. The induced expectancy and specific levels of self-expectancy had significant effects on the subjects' performance and ratings of the instructor. It was concluded that self-expectancy mediates the effects of teacher expectancy on learning performance. Implications of these results for the Pygmalion effect were discussed.

  12. Utilization and utility of immunohistochemistry in dermatopathology.

    PubMed

    Naert, Karen A; Trotter, Martin J

    2013-02-01

    Immunohistochemistry (IHC) is considered a valuable ancillary tool for dermatopathology diagnosis, but few studies have measured IHC utilization by dermatopathologists or assessed its diagnostic utility. In a regionalized, community-based dermatopathology practice, we measured IHC utilization (total requests, specific antibodies requested, and final diagnosis) over a 12-month period. Next, we assessed diagnostic utility by comparing a preliminary "pre-IHC" diagnosis based on routine histochemical staining with the final diagnosis rendered after consideration of IHC results. The dermatopathology IHC utilization rate was 1.2%, averaging 3.6 stains requested per case. Melanocytic, hematolymphoid, and fibrohistiocytic lesions made up 23%, 18%, and 16%, respectively, of the total cases requiring IHC. S100 and Melan A were the most frequently requested stains, ordered on 50% and 34% of IHC cases, respectively. The utility study revealed that IHC changed the diagnosis in 11% of cases, confirmed a diagnosis or excluded a differential diagnosis in 77%, and was noncontributory in 4%. Where IHC results prompted a change in diagnosis, 14% were a change from a benign to a malignant lesion, whereas 32% changed from one malignant entity to another. IHC is most commonly used in cutaneous melanocytic and hematolymphoid lesions. In 11% of dermatopathology cases in which IHC is used, information is provided that changes the H&E diagnosis. Such changes may have significant treatment implications. IHC is noncontributory in only a small percentage of cases.

  13. Components of attention modulated by temporal expectation.

    PubMed

    Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus

    2015-01-01

    By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is modulated by the observer's temporal expectations. We extended the investigation from single-stimulus recognition to whole report (Experiment 1) and partial report (Experiment 2). Cue-stimulus foreperiods were distributed geometrically using time steps of 500 ms. In high expectancy conditions, the probability that the stimulus would appear on the next time step, given that it had not yet appeared, was high, whereas in low expectancy conditions, the probability was low. The speed of encoding the stimuli into VSTM was higher in the high expectancy conditions. In line with the Easterbrook (1959) hypothesis, under high temporal expectancy, the processing was also more focused (selective). First, the storage capacity of VSTM was lower, so that fewer stimuli were encoded into VSTM. Second, the distribution of attentional weights across stimuli was less even: The efficiency of selecting targets rather than distractors for encoding into VSTM was higher, as was the spread of the attentional weights of the target letters.
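
    The geometric foreperiod manipulation described above can be sketched directly: when cue-stimulus foreperiods follow a geometric distribution over 500-ms steps, the hazard rate (the probability of stimulus onset at the next step, given no onset yet) is constant and equals the per-step parameter p. The parameter values below are illustrative, not those of the study.

```python
import numpy as np

def geometric_hazard(p, n_steps=8):
    """Conditional onset probability at each step of a geometric foreperiod."""
    steps = np.arange(1, n_steps + 1)
    pmf = (1 - p) ** (steps - 1) * p       # P(onset exactly at step k)
    survival = (1 - p) ** (steps - 1)      # P(no onset before step k)
    return pmf / survival                  # hazard: constant, equal to p

high = geometric_hazard(0.75)  # "high expectancy" condition (hypothetical p)
low = geometric_hazard(0.25)   # "low expectancy" condition (hypothetical p)
```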

  14. Utilities Expense Report.

    ERIC Educational Resources Information Center

    Moore, Deborah P.

    2001-01-01

    Examines how deregulation has affected school district utility costs. Offers ideas that can help school districts save money and energy. Provides several examples of state-wide initiatives intended to help school districts control utility costs. (GR)

  15. Major League Baseball Players’ Life Expectancies*

    PubMed Central

    Saint Onge, Jarron M.; Rogers, Richard G.; Krueger, Patrick M.

    2009-01-01

    Objective We examine the importance of anthropometric and performance measures, and age, period, and cohort effects in explaining life expectancies among major league baseball (MLB) players over the past century. Methods We use discrete time hazard models to calculate life tables with covariates with data from Total Baseball, a rich source of information on all players who played in the major league. Results Compared to 20-year-old U.S. males, MLB players can expect almost five additional years of life. Height, weight, handedness, and player ratings are unassociated with the risk of death in this population of highly active and successful adults. Career length is inversely associated with the risk of death, likely because those who play longer gain additional incomes, physical fitness, and training. Conclusions Our results indicate improvements in life expectancies with time for all age groups and indicate possible improvements in longevity in the general U.S. population. PMID:19756205

  16. Information structure expectations in sentence comprehension

    PubMed Central

    Carlson, Katy; Dickey, Michael Walsh; Frazier, Lyn; Clifton, Charles

    2009-01-01

    In English, new information typically appears late in the sentence, as does primary accent. Because of this tendency, perceivers might expect the final constituent or constituents of a sentence to contain informational focus. This expectation should in turn affect how they comprehend focus-sensitive constructions such as ellipsis sentences. Results from four experiments on sluicing sentences (e.g., The mobster implicated the thug, but we can’t find out who else) suggest that perceivers do prefer to place focus late in the sentence, though that preference can be mitigated by prosodic information (pitch accents, Experiment 2) or syntactic information (clefted sentences, Experiment 3) indicating that focus is located elsewhere. Furthermore, it is not necessarily the direct object, but the informationally-focused constituent that is the preferred antecedent (Experiment 4). Expectations regarding the information structure of a sentence, which are only partly cancelable by means of overt focus markers, may explain persistent biases in ellipsis resolution. PMID:18609404

  17. A Global Information Utility.

    ERIC Educational Resources Information Center

    Block, Robert S.

    1984-01-01

    High-powered satellites, along with other existing technologies, make possible a world information utility that could distribute virtually limitless information to every point on earth. The utility could distribute information for business, government, education, and entertainment. How the utility would work is discussed. (RM)

  18. Sourcebook on Research Utilization.

    ERIC Educational Resources Information Center

    Rubin, Allen, Ed.; Rosenblatt, Aaron, Ed.

    Major papers presented at the Conference on Research Utilization in Social Work Education are compiled in this sourcebook. The conference focused on six topics that reviewed the state of the art of research utilization and suggested directions for the future. The papers included are: Understanding Research Utilization in Social Work (Stuart A.…

  19. Different types of compression clothing do not increase sub-maximal and maximal endurance performance in well-trained athletes.

    PubMed

    Sperlich, Billy; Haegele, Matthias; Achtzehn, Silvia; Linville, John; Holmberg, Hans-Christer; Mester, Joachim

    2010-04-01

    Three textiles with increasing compressive surface were compared with non-compressive conventional clothing on physiological and perceptual variables during sub-maximal and maximal running. Fifteen well-trained endurance athletes (mean+/-s: age 27.1+/-4.8 years, VO(2max) 63.7+/-4.9 ml x min(-1) x kg(-1)) performed four sub-maximal (approximately 70% VO(2max)) and maximal tests with and without different compression stockings, tights, and whole-body compression suits. Arterial lactate concentration, oxygen saturation and partial pressure, pH, oxygen uptake, and ratings of muscle soreness were recorded before, during, and after all tests. In addition, we assessed time to exhaustion. Sub-maximal (P=0.22) and maximal oxygen uptake (P=0.26), arterial lactate concentration (P=0.16; 0.20), pH (P=0.23; 0.46), oxygen saturation (P=0.13; 0.26), and oxygen partial pressure (P=0.09; 0.20) did not differ between the types of clothing (effect sizes=0.00-0.45). Ratings of perceived exertion (P=0.10; 0.15), muscle soreness (P=0.09; 0.10) and time to exhaustion (P=0.16) were also unaffected by the different clothing (effect sizes=0.28-0.85). This was the first study to evaluate the effect on endurance performance of different types of compression clothing with increasing amounts of compressive surface. Overall, there were no performance benefits when using the compression garments.

  20. Dietary 2-oxoglutarate prevents bone loss caused by neonatal treatment with maximal dexamethasone dose.

    PubMed

    Dobrowolski, Piotr; Tomaszewska, Ewa; Muszyński, Siemowit; Blicharski, Tomasz; Pierzynowski, Stefan G

    2017-04-01

    2-Oxoglutarate (2-Ox) administered postnatally has the potential to improve or maintain the bone structure of animals simultaneously treated with maximal therapeutic doses of dexamethasone (Dex). This may open a new direction in the search for combined treatments for children treated with glucocorticoids (GCs), since a growing group of children is exposed to synthetic GCs and adverse effects such as glucocorticoid-induced osteoporosis and growth retardation are well recognized. Currently proposed combined therapies have numerous side effects. Thus, this study proposed a new direction in combined therapies utilizing dietary supplementation with a glutamine derivative. In the long-bone animal model presented, impairment caused by Dex was prevented by dietary supplementation with 2-Ox, and the vast majority of assessed bone parameters were restored almost to the control level. These results support the previous thesis on the regulatory mechanism of nutrient utilization by glutamine derivatives and enrich nutritional science.

  1. Great expectations. Eating expectancies as mediators of reinforcement sensitivity and eating.

    PubMed

    Hennegan, Julie M; Loxton, Natalie J; Mattar, Ameerah

    2013-12-01

    Eating expectancies are proposed as cognitive pathways linking reinforcement (reward and punishment) sensitivities and the tendency to over-eat in response to appetitive and emotional cues. In Study One (N=243 university women), explicit eating expectancies were tested as potential mediators of reinforcement sensitivities and eating styles. Broadly, expectancies that eating alleviates negative affect/boredom mediated the links between both reward and punishment sensitivity and emotional eating. The expectancy that eating is pleasurable and rewarding mediated the link between reward sensitivity and external eating. In Study Two (N=109), using an implicit eating expectancy task, the link between reward sensitivity and external eating was mediated via positive expectancy statements, notably that eating is pleasurable and rewarding. The link between reward sensitivity and emotional eating was mediated specifically by expectancies that eating manages boredom. Punishment sensitivity was not associated with any implicit expectancies. Findings support the role of expectancies as cognitive mediators in the relationship between reinforcement sensitivities and emotionally-driven versus externally-driven eating styles. However, the largely appetitive implicit expectancies task only supported an association with reward sensitivity.

  2. Maximizing the Online Learning Experience: Suggestions for Educators and Students

    ERIC Educational Resources Information Center

    Cicco, Gina

    2011-01-01

    This article will discuss ways of maximizing the online course experience for teachers- and counselors-in-training. The widespread popularity of online instruction makes it a necessary learning experience for future teachers and counselors (Ash, 2011). New teachers and counselors take on the responsibility of preparing their students for real-life…

  3. Maximizing Thermal Efficiency and Optimizing Energy Management (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    Researchers at the Thermal Test Facility (TTF) on the campus of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado, are addressing maximizing thermal efficiency and optimizing energy management through analysis of efficient heating, ventilating, and air conditioning (HVAC) strategies, automated home energy management (AHEM), and energy storage systems.

  4. On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies

    ERIC Educational Resources Information Center

    Erev, Ido; Barron, Greg

    2005-01-01

    Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives…

  5. Optoelectronic plethysmography compared to spirometry during maximal exercise.

    PubMed

    Layton, Aimee M; Moran, Sienna L; Garber, Carol Ewing; Armstrong, Hilary F; Basner, Robert C; Thomashow, Byron M; Bartels, Matthew N

    2013-01-15

    The purpose of this study was to compare simultaneous measurements of tidal volume (Vt) by optoelectronic plethysmography (OEP) and spirometry during a maximal cycling exercise test to quantify possible differences between methods. Vt measured simultaneously by OEP and spirometry was collected during a maximal exercise test in thirty healthy participants. The two methods were compared by linear regression and Bland-Altman analysis at submaximal and maximal exercise. The average difference between the two methods and the mean percentage discrepancy were calculated. Submaximal (SM) and maximal (M) exercise Vt measured by OEP and spirometry were very well correlated, SM R=0.963 (p<0.001), M R=0.982 (p<0.001), with a high degree of common variance, SM R(2)=0.928, M R(2)=0.983. Bland-Altman analysis demonstrated that during SM, OEP could measure exercise Vt as much as 0.134 L above and 0.025 L below that of spirometry; during M, OEP could measure exercise Vt as much as 0.188 L above and 0.017 L below that of spirometry. The discrepancy between measurements was -2.0 ± 7.2% at SM and -2.4 ± 3.9% at M. In conclusion, Vt measurements during exercise by OEP and spirometry are closely correlated, and the difference between methods was not significant.
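
    The limits-of-agreement analysis used above can be sketched in a few lines. This is a minimal Bland-Altman computation on synthetic data; the numbers do not reproduce the study's measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(0)
spiro = rng.uniform(1.0, 3.0, 30)            # spirometry Vt (L), synthetic
oep = spiro + rng.normal(0.05, 0.05, 30)     # OEP Vt with a small offset
bias, low_loa, high_loa = bland_altman(oep, spiro)
```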

  6. Fertilizer placement to maximize nitrogen use by fescue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The method of fertilizer nitrogen(N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained clay–pan soils of southeastern Kansas. This study was conducted to determine if knifed N r...

  7. Curriculum and Testing Strategies to Maximize Special Education STAAR Achievement

    ERIC Educational Resources Information Center

    Johnson, William L.; Johnson, Annabel M.; Johnson, Jared W.

    2015-01-01

    This document is from a presentation at the 2015 annual conference of the Science Teachers Association of Texas (STAT). The two sessions (each listed as feature sessions at the state conference) examined classroom strategies the presenter used in his chemistry classes to maximize Texas end-of-course chemistry test scores for his special population…

  8. Modifying Softball for Maximizing Learning Outcomes in Physical Education

    ERIC Educational Resources Information Center

    Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue

    2014-01-01

    Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…

  9. Maximal regularity for perturbed integral equations on periodic Lebesgue spaces

    NASA Astrophysics Data System (ADS)

    Lizama, Carlos; Poblete, Verónica

    2008-12-01

    We characterize the maximal regularity of periodic solutions for an additive perturbed integral equation with infinite delay in the vector-valued Lebesgue spaces. Our method is based on operator-valued Fourier multipliers. We also study resonances, characterizing the existence of solutions in terms of a compatibility condition on the forcing term.

  10. Apportioning Program Evaluation Resources to Maximize Information Yield.

    ERIC Educational Resources Information Center

    Sadler, D. Royce

    1982-01-01

    Not all data for a program evaluation may be equally valuable, and costs of collection may vary when using several methods to obtain data from several sources. An approach to maximize information yield for a fixed, limited budget using a mathematical technique known as linear programming is described and generalized. (Author/CM)
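
    A toy version of the budget-constrained allocation idea can be sketched as a fractional ("linear programming") problem: each data source has a collection cost and an information value, and a budget caps total spending. This relaxation is solved exactly by a greedy pass over value-per-cost ratios. All numbers are hypothetical, not from the article.

```python
def allocate(costs, values, budget):
    """Fraction of each source to collect, maximizing value within budget."""
    order = sorted(range(len(costs)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    effort = [0.0] * len(costs)            # fraction of each source collected
    for i in order:
        spend = min(costs[i], budget)      # spend up to the source's full cost
        effort[i] = spend / costs[i]
        budget -= spend
        if budget <= 0:
            break
    return effort

# Three sources: costs 4, 2, 3 and values 8, 6, 3; budget of 5 units.
effort = allocate([4.0, 2.0, 3.0], [8.0, 6.0, 3.0], budget=5.0)
```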

  11. Formulation of Maximized Weighted Averages in URTURIP Technique

    DTIC Science & Technology

    2001-10-25

    Bruno Migeon, Philippe Deforge, Pierre Marché. Laboratoire Vision et Robotique, 63, avenue de Lattre de Tassigny, 18020 Bourges Cedex, France.

  12. A Maximal Flow Approach to Dynamic Routing in Communication Networks,

    DTIC Science & Technology

    1980-05-01

    of nodes. In Appendix B we provide a computer program in Fortran for finding the maximal flow in these networks, based on the algorithm of Edmonds and Karp, implemented by a Fortran subroutine called MAXFL. The algorithm finds the shortest path between source and destination on which additional flow can be sent.
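
    The Edmonds-Karp scheme referenced above (implemented in the report as the Fortran subroutine MAXFL) repeatedly finds the shortest augmenting path by breadth-first search and pushes the bottleneck flow along it. The Python sketch below is ours, for illustration only; it is not the report's code.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for the shortest path with remaining residual capacity
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                 # no augmenting path left: done
            return total
        # bottleneck capacity along the path found
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # push the bottleneck flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck        # residual (backward) edge
            v = u
        total += bottleneck

caps = [[0, 3, 2, 0],
        [0, 0, 1, 2],
        [0, 0, 0, 2],
        [0, 0, 0, 0]]
total_flow = max_flow(caps, 0, 3)
```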

  13. Maximizing the Benefits of an Administrative Internship: Some Practical Advice.

    ERIC Educational Resources Information Center

    Oldfield, Kenneth; Ayers, Nancy

    Recommendations to help student interns in administrative positions maximize their educational opportunities vis-a-vis the "real world" and to also help them avoid certain placement-associated problems. The suggestions may be helpful to both new and established internship directors as well. Attention is focused on governmental administrative…

  14. Maximizing grain sorghum water use efficiency under deficit irrigation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...

  15. Maximizing Access, Equity, and Inclusion in General and Special Education

    ERIC Educational Resources Information Center

    Obiakor, Festus E.

    2011-01-01

    The goal of any educational program is to help its students to maximize their fullest potential in inclusive environments. For many students with disabilities, having an inclusive environment seems to be an ideal policy. Ironically, this policy continues to be debatable and controversial. Sometimes, the controversy or debate dominates the real…

  16. Optimal technique for maximal forward rotating vaults in men's gymnastics.

    PubMed

    Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R

    2015-08-01

    In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible.

  17. Maximizing plant density affects broccoli yield and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...

  18. Mentoring as Professional Development for Novice Entrepreneurs: Maximizing the Learning

    ERIC Educational Resources Information Center

    St-Jean, Etienne

    2012-01-01

    Mentoring can be seen as relevant if not essential in the continuing professional development of entrepreneurs. In the present study, we seek to understand how to maximize the learning that occurs through the mentoring process. To achieve this, we consider various elements that the literature suggested are associated with successful mentoring and…

  19. Emotional Control and Instructional Effectiveness: Maximizing a Timeout

    ERIC Educational Resources Information Center

    Andrews, Staci R.

    2015-01-01

    This article provides recommendations for best practices for basketball coaches to maximize the instructional effectiveness of a timeout during competition. Practical applications are derived from research findings linking emotional intelligence to effective coaching behaviors. Additionally, recommendations are based on the implications of the…

  20. How to Maximize Learning for Gifted Math Students

    ERIC Educational Resources Information Center

    Chamberlin, Scott A.

    2008-01-01

    Having a gifted math or science student in the family or classroom is a fascination as well as a significant challenge and responsibility for many parents and teachers. In order to help maximize student learning, several questions need to be asked. What should be the role of technology? How well do traditional schools serve gifted students? What…

  1. Nursing Students' Awareness and Intentional Maximization of Their Learning Styles

    ERIC Educational Resources Information Center

    Mayfield, Linda Riggs

    2012-01-01

    This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…

  2. The Profit-Maximizing Firm: Old Wine in New Bottles.

    ERIC Educational Resources Information Center

    Felder, Joseph

    1990-01-01

    Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)

  3. Maximally entangled mixed-state generation via local operations

    SciTech Connect

    Aiello, A.; Puentes, G.; Voigt, D.; Woerdman, J. P.

    2007-06-15

    We present a general theoretical method to generate maximally entangled mixed states of a pair of photons initially prepared in the singlet polarization state. This method requires only local operations upon a single photon of the pair and exploits spatial degrees of freedom to induce decoherence. We report also experimental confirmation of these theoretical results.

  4. A Method for Maximizing Split-Half Reliability Coefficients

    ERIC Educational Resources Information Center

    Callender, John C.; Osburn, H. G.

    1977-01-01

    An efficient algorithm for maximizing split-half reliability coefficients is described. Coefficients derived by the algorithm were found to be generally larger than odd-even split-half coefficients or other internal consistency measures and nearly as large as the largest split half coefficients. MSPLIT, Odd-Even, and Kuder-Richardson-20…
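
    The coefficient being maximized can be sketched concretely. The algorithm in the article searches over item splits to maximize this value; the snippet below only evaluates a single given split, using the Spearman-Brown correction on the correlation between half-test scores. The data and the split are hypothetical.

```python
import numpy as np

def split_half_reliability(items, half_a, half_b):
    """Spearman-Brown corrected split-half reliability for one split."""
    items = np.asarray(items, float)       # rows: respondents, cols: items
    a = items[:, half_a].sum(axis=1)       # score on the first half
    b = items[:, half_b].sum(axis=1)       # score on the second half
    r = np.corrcoef(a, b)[0, 1]            # correlation between half scores
    return 2 * r / (1 + r)                 # Spearman-Brown step-up

rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(scale=1.0, size=(200, 6))     # 6 parallel items
rel = split_half_reliability(items, [0, 2, 4], [1, 3, 5])  # odd-even split
```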

  5. [Normal or high maximal oxygen uptake: a target in health].

    PubMed

    Ben-Dov, Issahar; Segel, Michael

    2012-02-01

    To achieve the predicted maximal oxygen consumption, many organs need to increase their output in a synchronized fashion. Therefore, maximal oxygen consumption is the single most reliable parameter predicting fitness, morbidity, and mortality. Peak O2 uptake can be measured from noninvasive ventilatory parameters during a short, incremental, cardiopulmonary exercise test (CPET) on a cycle ergometer or on a treadmill. Commercial systems are available, and all enable breath-by-breath measurement of ventilation, exhaled gas concentration, oxygen saturation, and additional cardiorespiratory parameters. Performance of the test requires adherence to strict guidelines and experienced technicians and physicians, although their qualification has not yet been defined by the national health authorities. There are well-defined indications for and benefits from CPET, among them determination of the anaerobic threshold, identification of the cause of dyspnea, timing of heart transplantation, exercise prescription for training and rehabilitation, and follow-up of disease progression or of response to pharmacological or other modes of therapy. Measuring maximal oxygen consumption should be encouraged in health and disease, and normal maximal oxygen consumption should be defined as a health target.

  6. Hardy-Littlewood maximal operator in generalized grand Lebesgue spaces

    NASA Astrophysics Data System (ADS)

    Umarkhadzhiev, Salaudin M.

    2014-12-01

    We obtain sufficient conditions and necessary conditions for the maximal operator to be bounded in the generalized grand Lebesgue space on an open set Ω ⊂ Rn which is not necessarily bounded. The sufficient conditions coincide with the necessary conditions, for instance, in the case where Ω is bounded and the standard definition of the grand space is used.
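
    For reference, the Hardy-Littlewood maximal operator in question, in one common normalization for functions on an open set Ω ⊂ Rn, is

```latex
(Mf)(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)\cap\Omega} |f(y)|\, dy ,
\qquad x \in \Omega .
```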

  7. Teacher Praise: Maximizing the Motivational Impact. Teaching Strategies.

    ERIC Educational Resources Information Center

    McVey, Mary D.

    2001-01-01

    Recognizes the influence of praise on human behavior, and provides specific suggestions on how to maximize the positive effects of praise when intended as positive reinforcement. Examines contingency, specificity, and selectivity aspects of praise. Cautions teachers to avoid the controlling effects of praise and the possibility that praise may…

  8. Quantum state space as a maximal consistent set

    NASA Astrophysics Data System (ADS)

    Tabia, Gelo Noel

    2012-02-01

    Measurement statistics in quantum theory are obtained from the Born rule and the uniqueness of the probability measure it assigns through quantum states is guaranteed by Gleason's theorem. Thus, a possible systematic way of exploring the geometry of quantum state space expresses quantum states in terms of outcome probabilities of a symmetric informationally complete measurement. This specific choice for representing quantum states is motivated by how the associated probability space provides a natural venue for characterizing the set of quantum states as a geometric construct called a maximal consistent set. We define the conditions for consistency and maximality of a set, provide some examples of maximal consistent sets and attempt to deduce the steps for building up a maximal consistent set of probability distributions equivalent to Hilbert space. In particular, we demonstrate how the reconstruction procedure works for qutrits and observe how it reveals an elegant underlying symmetry among five SIC-POVMs and a complete set of mutually unbiased bases, known in finite affine geometry as the Hesse configuration.

  9. Adolescent Expectations of Early Death Predict Young Adult Socioeconomic Status

    PubMed Central

    Nguyen, Quynh C.; Hussey, Jon M.; Halpern, Carolyn T.; Villaveces, Andres; Marshall, Stephen W.; Siddiqi, Arjumand; Poole, Charles

    2013-01-01

    Among adolescents, expectations of early death have been linked to future risk behaviors. These expectations may also reduce personal investment in education and training, thereby lowering adult socioeconomic status attainment. The importance of socioeconomic status is highlighted by pervasive health inequities and dramatic differences in life expectancy among education and income groups. The objectives of this study were to investigate patterns of change in perceived chances of living to age 35 (Perceived Survival Expectations; PSE), predictors of PSE, and associations between PSE and future socioeconomic status attainment. We utilized the U.S. National Longitudinal Study of Adolescent Health (Add Health) initiated in 1994-95 among 20,745 adolescents in grades 7-12 with follow-up interviews in 1996 (Wave II), 2001-02 (Wave III) and 2008 (Wave IV; ages 24-32). At Wave I, 14% reported ≤ 50% chance of living to age 35 and older adolescents reported lower PSE than younger adolescents. At Wave III, PSE were similar across age. Changes in PSE from Wave I to III were moderate, with 89% of respondents reporting no change (56%), one level higher (22%) or one level lower (10%) in a 5-level PSE variable. Higher block group poverty rate, perceptions that the neighborhood is unsafe, and less time in the U.S. (among the foreign-born) were related to low PSE at Waves I and III. Low PSE at Waves I and III predicted lower education attainment and personal earnings at Wave IV in multinomial logistic regression models controlling for confounding factors such as previous family socioeconomic status, individual demographic characteristics, and depressive symptoms. Anticipation of an early death is prevalent among adolescents and predictive of lower future socioeconomic status. Low PSE reported early in life may be a marker for worse health trajectories. PMID:22405687

  10. Home care technology through an ability expectation lens.

    PubMed

    Wolbring, Gregor; Lashewicz, Bonnie

    2014-06-20

    Home care is on the rise, and its delivery is increasingly reliant on an expanding variety of health technologies ranging from computers to telephone "health apps" to social robots. These technologies are most often predicated on expectations that people in their homes (1) can actively interact with these technologies and (2) are willing to submit to the action of the technology in their home. Our purpose is to use an "ability expectations" lens to bring together, and provide some synthesis of, the types of utility and disadvantages that can arise for people with disabilities in relation to home care technology development and use. We searched the academic databases Scopus, Web of Science, EBSCO ALL, IEEE Xplore, and Compendex to collect articles that had the term "home care technology" in the abstract or as a topic (in the case of Web of Science). We also used our background knowledge and related academic literature pertaining to self-diagnosis, health monitoring, companionship, health information gathering, and care. We examined background articles and articles collected through our home care technology search in terms of ability expectations assumed in the presentation of home care technologies, or discussed in relation to home care technologies. While advances in health care support are made possible through emerging technologies, we urge critical examination of such technologies in terms of implications for the rights and dignity of people with diverse abilities. Specifically, we see potential for technologies to result in new forms of exclusion and powerlessness. Ableism influences choices made by funders, policy makers, and the public in the development and use of home health technologies and impacts how people with disabilities are served and how useful health support technologies will be for them. We urge continued critical examination of technology development and use according to ability expectations, and we recommend increasing incorporation of

  11. Batch Mode Active Learning for Regression With Expected Model Change.

    PubMed

    Cai, Wenbin; Zhang, Muhan; Zhang, Ya

    2016-04-20

While active learning (AL) has been widely studied for classification problems, comparatively little work has addressed AL for regression. In this paper, we introduce a new AL framework for regression, expected model change maximization (EMCM), which aims to choose the unlabeled data instances that would result in the greatest change to the current model once labeled. The model change is quantified as the difference between the current model parameters and the updated parameters after the inclusion of the newly selected examples. In light of the stochastic gradient descent learning rule, we approximate the change as the gradient of the loss function with respect to each single candidate instance. Under the EMCM framework, we propose novel AL algorithms for linear and nonlinear regression models. In addition, by simulating the behavior of the sequential AL policy when applied for k iterations, we further extend the algorithms to batch-mode AL, simultaneously choosing a set of the k most informative instances at each query time. Extensive experimental results on both UCI and StatLib benchmark data sets demonstrate that the proposed algorithms are highly effective and efficient.
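The abstract's gradient-based approximation can be sketched for the linear-regression case as follows. This is a minimal illustration, not the authors' implementation: the function name, the bootstrap ensemble used to estimate unknown labels, and the greedy top-k batch selection (in place of the paper's simulated sequential policy) are all simplifying assumptions.

```python
import numpy as np

def emcm_select(X_pool, X_labeled, y_labeled, k=5, n_bootstrap=4, rng=None):
    """Select k pool instances whose inclusion would change a linear model most.

    For a candidate x with unknown label y, the SGD update gradient of the
    squared loss is (w.x - y) * x. Since y is unknown, it is estimated by an
    ensemble of bootstrap models, and the score is the average gradient norm.
    """
    rng = np.random.default_rng(rng)

    # Fit the current linear model by least squares.
    w, *_ = np.linalg.lstsq(X_labeled, y_labeled, rcond=None)

    # Bootstrap ensemble to stand in for the unknown true labels.
    n = len(X_labeled)
    ensemble = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)
        wb, *_ = np.linalg.lstsq(X_labeled[idx], y_labeled[idx], rcond=None)
        ensemble.append(wb)

    # Expected model change: mean norm of the loss gradient over the ensemble.
    scores = np.zeros(len(X_pool))
    for i, x in enumerate(X_pool):
        pred = x @ w
        changes = [np.linalg.norm((pred - x @ wb) * x) for wb in ensemble]
        scores[i] = np.mean(changes)

    # Greedy batch: take the k highest-scoring candidates.
    return np.argsort(scores)[::-1][:k]
```

The paper's batch mode simulates k sequential model updates rather than taking the top k scores in one pass; the one-pass version above merely conveys the scoring idea.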

  12. Great Expectations: How Role Expectations and Role Experiences Relate to Perceptions of Group Cohesion.

    PubMed

    Benson, Alex J; Eys, Mark A; Irving, P Gregory

    2016-04-01

Many athletes experience a discrepancy between the roles they expect to fulfill and the roles they eventually occupy. Drawing from met expectations theory, we applied response surface methodology to examine how role expectations, in relation to role experiences, influence perceptions of group cohesion among Canadian Interuniversity Sport athletes (N = 153). On the basis of data from two time points, as athletes approached and exceeded their role contribution expectations, they reported higher perceptions of task cohesion. Furthermore, as athletes approached and exceeded their social involvement expectations, they reported higher perceptions of social cohesion. These response surface patterns, pertaining to task and social cohesion, were driven by the positive influence of role experiences. On the basis of the interplay between athletes' role experiences and their perception of the group environment, efforts to improve team dynamics may benefit from focusing on improving the quality of role experiences, in conjunction with developing realistic role expectations.
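Response surface analysis of this kind typically fits a second-order polynomial of the two predictors (here, expectation and experience) and inspects the fitted surface. A minimal sketch, assuming a standard quadratic specification; the function name and variable names are illustrative, not drawn from the study:

```python
import numpy as np

def fit_response_surface(expect, experience, outcome):
    """Fit the second-order polynomial commonly used in response surface
    analysis: z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2,
    where x = expectation and y = experience.

    Returns the coefficient vector (b0..b5) estimated by least squares.
    """
    x = np.asarray(expect, dtype=float)
    y = np.asarray(experience, dtype=float)
    # Design matrix with intercept, linear, and quadratic/interaction terms.
    D = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(D, np.asarray(outcome, dtype=float), rcond=None)
    return coef
```

In applications such as this study, the signs and magnitudes of the fitted coefficients along the lines of congruence (x = y) and incongruence (x = -y) are what indicate whether exceeding expectations relates to higher cohesion.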

  13. College for some to college for all: social background, occupational expectations, and educational expectations over time.

    PubMed

    Goyette, Kimberly A

    2008-06-01

The educational expectations of 10th-graders dramatically increased from 1980 to 2002. This rise is attributable in part to the changing educational composition of students' parents and is related to the educational profiles of students' expected occupations. Students whose parents have gone to college are more likely to expect to attend college themselves, and students expected occupations that were more prestigious in 2002 than in 1980. The educational requirements of particular occupation categories have risen only slightly. These analyses also reveal that educational expectations in recent cohorts are more loosely linked to social background and occupational plans than they were in 1980. The declining importance of parents' background and the decoupling of educational and occupational plans, in addition to a strong and significant effect of cohort on educational expectations, suggest that the expectation of four-year college attainment is indeed becoming the norm.

  14. Interindividual variation in thermal sensitivity of maximal sprint speed, thermal behavior, and resting metabolic rate in a lizard.

    PubMed

    Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François

    2013-01-01

    Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.

  15. Lowered Expectations: How Schools Reward Incompetence.

    ERIC Educational Resources Information Center

    Jackson, Bruce

    1985-01-01

    Playing "dumb" can earn students easier classes, lower expectations, reduced pressure, and individual attention. Schools can stop rewarding failure by making remedial classes difficult, backing up homework policies with unappealing alternatives, providing penalties for attendance violations, and deglamorizing alternatives to regular programs. (PGD)

  16. Developing expectations regarding the boundaries of expertise.

    PubMed

    Landrum, Asheley R; Mills, Candice M

    2015-01-01

Three experiments examined elementary school-aged children's and adults' expectations regarding what specialists (i.e., those with narrow domains of expertise) and generalists (i.e., those with broad domains of expertise) are likely to know. Experiment 1 demonstrated developmental differences in the ability to differentiate between generalists and specialists, with younger children attributing more specific trivia knowledge to generalists than older children and adults did. Experiment 2 demonstrated that children and adults expected generalists to have more underlying-principles knowledge than specific trivia knowledge about unfamiliar animals. However, they believed that generalists would have more of both types of knowledge than they themselves did. Finally, Experiment 3 demonstrated that children and adults recognized that underlying-principles knowledge can be generalized between topics closely related to a specialist's domain of expertise. However, they did not recognize when this knowledge was generalizable to topics slightly less related, expecting generalists to know only as much as they themselves would. Importantly, this work contributes to the literature by showing how much of, and what kinds of, knowledge different types of experts are expected to have. In sum, this work provides insight into some of the ways children's notions of expertise change over development. The current research demonstrates that between the ages of 5 and 10, children are developing the ability to recognize how experts' knowledge is likely to be limited. That said, even older children at times struggle to determine the breadth of an expert's knowledge.

  17. Parental Expectations about Adapted Physical Education Services

    ERIC Educational Resources Information Center

    Chaapel, Holly; Columna, Luis; Lytle, Rebecca; Bailey, JoEllen

    2013-01-01

    The purpose of this study was to characterize the expectations of parents of children with disabilities regarding adapted physical education services. Participants ("N" = 10) were parents of children with disabilities. Parents participated in one-on-one semistructured interviews. Transcripts were analyzed through a constant comparative…

  18. Characterizing Student Expectations: A Small Empirical Study

    ERIC Educational Resources Information Center

    Warwick, Jonathan

    2016-01-01

This paper describes the results of a small empirical study (n = 130), in which undergraduate students in the Business Faculty of a UK university were asked to express views and expectations relating to the study of mathematics. Factor analysis is used to identify latent variables emerging from clusters of the measured variables and these are…

  19. Men's Alcohol Expectancies at Selected Community Colleges

    ERIC Educational Resources Information Center

    Derby, Dustin C.

    2011-01-01

    Men's alcohol expectancies are an important cognitive-behavioral component of their consumption; yet, sparse research details such behaviors for men in two-year colleges. Selected for inclusion with the current study were 563 men from seven Illinois community colleges. Logistic regression analysis indicated four significant, positive relationships…

  20. Effects of Syntactic Expectations on Speech Segmentation

    ERIC Educational Resources Information Center

    Mattys, Sven L.; Melhorn, James F.; White, Laurence

    2007-01-01

    Although the effect of acoustic cues on speech segmentation has been extensively investigated, the role of higher order information (e.g., syntax) has received less attention. Here, the authors examined whether syntactic expectations based on subject-verb agreement have an effect on segmentation and whether they do so despite conflicting acoustic…