Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.
2013-02-15
We consider the problem of maximizing expected terminal power utility (a risk-sensitive criterion). The underlying market model is a regime-switching diffusion where the regime is determined by an unobservable factor process forming a finite-state Markov process. The main novelty is that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process whose intensity is also driven by the unobserved Markovian factor process. This yields a more realistic model for many practical situations, such as markets with liquidity restrictions; on the other hand, it considerably complicates the problem, to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to power utility. For log utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Classical subjective expected utility
Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi
2013-01-01
We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty. PMID:23559375
Robust estimation by expectation maximization algorithm
NASA Astrophysics Data System (ADS)
Koch, Karl Rudolf
2013-02-01
A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining components gives the distribution for an outlier. Missing data are introduced to indicate which observation belongs to which component. The unknown location parameters and the unknown scale parameter of the linear model are estimated by iteratively applying the EM algorithm. The E (expectation) step determines the expected value of the likelihood function given the observations and the current estimate of the unknown parameters, while the M (maximization) step computes new estimates by maximizing this expectation. In contrast to Huber's M-estimation, the EM algorithm not only identifies outliers by assigning small weights to large residuals but also estimates the outliers themselves, so they can be corrected by the parameters of the linear model freed from the distortions caused by gross errors. Monte Carlo methods with random variates from the normal distribution then give expectations, variances, covariances, and confidence regions for functions of the parameters estimated while accounting for the outliers. The method is demonstrated by the analysis of laser scanner measurements containing gross errors.
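The alternating E- and M-steps described above can be illustrated with a minimal two-component normal mixture in which the second, wider component absorbs outliers. This is a toy sketch on synthetic data, not Koch's full linear-model estimator; all names and the data are illustrative.

```python
import numpy as np

def em_normal_mixture(y, n_iter=50):
    """EM for a two-component normal mixture: inliers plus a wider outlier
    component. Returns means, standard deviations, mixture weights, and the
    posterior probability that each observation is an inlier."""
    mu = np.array([np.median(y), np.median(y)])
    sigma = np.array([np.std(y), 3.0 * np.std(y)])  # component 1 starts wider
    w = np.array([0.9, 0.1])
    for _ in range(n_iter):
        # E-step: responsibilities = posterior component probabilities
        dens = np.stack([
            w[k] * np.exp(-0.5 * ((y - mu[k]) / sigma[k]) ** 2)
            / (sigma[k] * np.sqrt(2.0 * np.pi))
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=1)
        mu = (r * y).sum(axis=1) / nk
        sigma = np.sqrt((r * (y - mu[:, None]) ** 2).sum(axis=1) / nk)
        w = nk / len(y)
    return mu, sigma, w, r[0]

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10.0, 1.0, 95),   # clean measurements
                       rng.normal(25.0, 1.0, 5)])   # gross errors
mu, sigma, w, p_inlier = em_normal_mixture(data)
```

After convergence the first component tracks the clean measurements while `p_inlier` down-weights the gross errors, mirroring the weighting interpretation of M-estimation mentioned in the abstract.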
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both the clean and stego images at several embedding rates are available for training. In this paper a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is treated as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with embedding rates between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
Expectation maximization applied to GMTI convoy tracking
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
2002-08-01
Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar we in particular address the task of track maintenance for ground-moving convoys consisting of a small number of individual vehicles. In the proposed approach the identity of the individual vehicles within the convoy is no longer stressed; their kinematical state vectors are instead treated as internal degrees of freedom characterizing the convoy, which is considered a collective unit. In this context, the Expectation Maximization (EM) technique, originally developed for incomplete-data problems in statistical inference and first applied to tracking by Streit et al., seems a promising approach. We suggest embedding the EM algorithm into a more traditional Bayesian tracking framework for dealing with false or unwanted sensor returns. The proposed distinction between external and internal data association conflicts (i.e. those among the convoy vehicles) should also enable the application of the sequential track extraction techniques introduced by Van Keuk for aircraft formations, providing estimates of the number of individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground-moving vehicles can well be masked by the sensor-specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can last over a longer series of sensor updates and therefore will seriously affect track quality unless properly handled. Moreover, for ground-moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used
Deber, R B; Goel, V
1990-01-01
Concepts of justice, risk, and ethics can be merged with decision analysis by requiring the analyst to specify explicitly a decision rule or sequence of rules. Decision rules are categorized by whether they consider: 1) aspects of outcome distributions beyond central tendencies; 2) probabilities as well as utilities of outcomes; and 3) means as well as ends. This formulation suggests that distribution-based decision rules could address both risk (for an individual) and justice (for the population). Rational choice under risk might ignore probability information if choices are one-time only (vs. repeated events) or if one branch contains unlikely but disastrous outcomes. Incorporating risk attitude into decision rules rather than utilities could facilitate the use of multiattribute approaches to measuring outcomes. Certain ethical concerns could be addressed by prior specification of rules for allowing particular branches. Examples, including the selection of polio vaccine strategies, are discussed, and theoretical and practical implications of a decision-rule approach are noted. PMID:2196412
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…
Blood detection in wireless capsule endoscopy using expectation maximization clustering
NASA Astrophysics Data System (ADS)
Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.
2006-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturer also provides a software tool for bleeding detection, called Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination: the sensitivity and specificity of SBI have been reported as about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves a sensitivity of 92% and a specificity of 98%.
Inexact Matching of Ontology Graphs Using Expectation-Maximization
Doshi, Prashant; Kolli, Ravikanth; Thomas, Christopher
2009-01-01
We present a new method for mapping ontology schemas that address similar domains. The problem of ontology matching is crucial, since we are witnessing a decentralized development and publication of ontological data. We formulate the problem of inferring a match between two ontologies as a maximum likelihood problem and solve it using the technique of expectation-maximization (EM). Specifically, we adopt directed graphs as our model for ontology schemas and use a generalized version of EM to arrive at a map between the nodes of the graphs. We exploit the structural, lexical, and instance similarity between the graphs, and differ from previous approaches in the way we utilize them to arrive at a possibly inexact match. Inexact matching is the process of finding the best possible match between two graphs when exact matching is not possible or is computationally difficult. In order to scale the method to large ontologies, we identify the computational bottlenecks and adapt the generalized EM by using a memory-bounded partitioning scheme. We provide comparative experimental results in support of our method on two well-known ontology alignment benchmarks and discuss their implications. PMID:20160892
Expected Utility Distributions for Flexible, Contingent Execution
NASA Technical Reports Server (NTRS)
Bresina, John L.; Washington, Richard
2000-01-01
This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
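The branch-point rule (evaluate each eligible option's utility distribution at the current time and pick the maximum) can be sketched as follows; the option names, time windows, and utility functions are invented for illustration and are not from the paper.

```python
def select_option(options, t):
    """Pick the eligible option with the highest expected utility at time t.

    options: list of (name, window, utility_fn), where window = (earliest,
    latest) allowed start times and utility_fn maps a start time to the
    expected utility of the plan suffix beginning with that option.
    """
    eligible = [(name, u(t)) for name, (lo, hi), u in options if lo <= t <= hi]
    if not eligible:
        return None
    return max(eligible, key=lambda x: x[1])[0]

options = [
    ("drive_to_rock", (0, 30), lambda t: 10.0 - 0.2 * t),  # utility decays if started late
    ("image_horizon", (10, 60), lambda t: 6.0),            # time-insensitive fallback
]
choice_early = select_option(options, 5)   # only drive_to_rock is eligible
choice_late = select_option(options, 25)   # drive utility 5.0 < fallback 6.0
```

Because the utility functions are evaluated at the actual branch-arrival time, the same cached distributions serve whether execution runs early or late, which is the point the abstract makes about avoiding dynamic updates.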
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states from measurement data. Then, the parameters are calculated from the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite-bus system and compared with a method in which both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated, and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture
Rujirakul, Kanokmon; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. This research therefore presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation and hence the complexity of these stages. To improve the computational time further, a novel parallel architecture was employed to exploit parallelization of the matrix computations during the feature extraction and classification stages, including parallel preprocessing and their combinations, in what is called the Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively. PMID:24955405
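The EM route to PCA avoids forming the full covariance matrix and its eigendecomposition, which is the complexity reduction this line of work exploits. Below is a minimal serial sketch in the style of Roweis' EM algorithm for PCA, not the paper's parallel architecture; the data and names are our own.

```python
import numpy as np

def em_pca(Y, k, n_iter=100, seed=0):
    """EM for PCA (Roweis-style sketch): alternate between inferring latent
    coordinates (E-step) and refitting the loading matrix (M-step), never
    forming the d x d covariance matrix.

    Y: (d, n) centered data. Returns an orthonormal basis (d, k) of the
    estimated k-dimensional principal subspace.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Y.shape[0], k))
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)   # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)    # M-step: new loadings
    Q, _ = np.linalg.qr(W)                      # orthonormalize the subspace
    return Q

# synthetic data with one dominant direction (the first axis)
rng = np.random.default_rng(1)
d, n = 5, 500
signal = 10.0 * rng.standard_normal(n)
Y = np.outer(np.eye(d)[0], signal) + 0.1 * rng.standard_normal((d, n))
Y -= Y.mean(axis=1, keepdims=True)
basis = em_pca(Y, k=1)
alignment = abs(basis[:, 0] @ np.eye(d)[0])   # cosine with the true direction
```

The per-iteration cost is dominated by matrix products of size d x k and k x n, which is what makes the matrix computations amenable to the kind of parallelization the paper describes.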
The Noisy Expectation-Maximization Algorithm for Multiplicative Noise Injection
NASA Astrophysics Data System (ADS)
Osoba, Osonde; Kosko, Bart
2016-03-01
We generalize the noisy expectation-maximization (NEM) algorithm to allow arbitrary modes of noise injection besides just adding noise to the data. The noise must still satisfy a NEM positivity condition. This generalization includes the important special case of multiplicative noise injection. A generalized NEM theorem shows that all measurable modes of injecting noise will speed the average convergence of the EM algorithm if the noise satisfies a generalized NEM positivity condition. This noise-benefit condition has a simple quadratic form for Gaussian and Cauchy mixture models in the case of multiplicative noise injection. Simulations show a multiplicative-noise EM speed-up of more than 27% in a simple Gaussian mixture model. Injecting blind noise only slowed convergence. A related theorem gives a sufficient condition for an average EM noise benefit for arbitrary modes of noise injection if the data model comes from the general exponential family of probability density functions. A final theorem shows that injected noise slows EM convergence on average if the NEM inequalities reverse and the noise satisfies a negativity condition.
Expectation maximization reconstruction for circular orbit cone-beam CT
NASA Astrophysics Data System (ADS)
Dong, Baoyu
2008-03-01
Cone-beam computed tomography (CBCT) is a technique for imaging cross-sections of an object using a series of X-ray measurements taken from different angles around the object. It has been widely applied in diagnostic medicine and industrial non-destructive testing. Traditional CT reconstructions are limited by many kinds of artifacts and can yield unsatisfactory images. To reduce image noise and artifacts, we propose a statistical iterative approach for cone-beam CT reconstruction. First, the theory of maximum likelihood estimation is extended to the X-ray scan, and an expectation-maximization (EM) formula is derived for direct reconstruction of circular-orbit cone-beam CT. Then the EM formula is implemented in cone-beam geometry for artifact reduction. The EM algorithm is a feasible iterative method based on the statistical properties of the Poisson distribution, and it provides good-quality reconstructions after a few iterations for cone-beam CT. Finally, experimental results with computer-simulated data and real CT data are presented to verify that our method is effective.
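For Poisson-distributed measurements, the classic MLEM reconstruction update has the multiplicative form x &lt;- (x / A^T 1) * A^T (y / Ax), where A is the system matrix mapping voxels to ray sums. A minimal sketch on a toy system (our own example, not the paper's cone-beam geometry):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM reconstruction for Poisson data.

    A: (n_rays, n_voxels) system matrix; y: measured counts per ray.
    Update: x <- x / (A^T 1) * A^T (y / (A x)).
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                                   # forward projection
        ratio = y / np.maximum(proj, 1e-12)            # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

# tiny 2-voxel, 3-ray example with noiseless, consistent data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The update is multiplicative, so a non-negative starting image stays non-negative throughout, one reason this family of algorithms suits count data.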
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-04-15
We study a robust utility maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the optimal consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Price of oil and OPEC behavior: a utility maximization model
Adeinat, M.K.
1985-01-01
There is growing evidence that OPEC has neither behaved as a cartel, at least in the last decade, nor maximized the discounted value of its profits as would be suggested by the theory of exhaustible resources. This dissertation attempts to find a way out of this dead end by proposing a utility maximization model. According to the model, the decision of how much crude oil each country produces is determined by the country's budgetary needs. The objective of each country is to choose present consumption and future consumption at time t so as to maximize its utility function subject to its budget and absorptive-capacity constraints; future consumption must be financed by future income, generated either by investment out of current income or by the proceeds of its oil reserves. The model predicts that whenever the amount of savings exceeds the country's absorptive capacity as a result of higher oil prices, the country responds by cutting back its oil production. This prediction is supported by two empirical findings: (1) the marginal propensity to save (MPS) exceeded the marginal propensity to invest (MPI) during the period of study (1967-1981), implying that OPEC countries were facing an absorptive-capacity constraint, and (2) the quantity of oil production responded negatively to permanent income in all three countries, the response being highly significant for the countries with the greatest budget surpluses.
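The trade-off between present and future consumption can be made concrete with a stylized two-period log-utility example (our own illustration, not the dissertation's model): maximize ln(c1) + beta*ln(c2) subject to c2 = (1 + r)(y - c1). The first-order condition 1/c1 = beta/(y - c1) gives the closed form c1* = y/(1 + beta), independent of the interest rate.

```python
import numpy as np

def optimal_c1(y, beta):
    """Closed-form first-period consumption for max ln(c1) + beta*ln(c2)
    with c2 = (1 + r) * (y - c1); the interest rate r drops out."""
    return y / (1.0 + beta)

# numerical check of the closed form by brute-force grid search
y, beta, r = 100.0, 0.9, 0.05
grid = np.linspace(0.01, y - 0.01, 100_000)
utility = np.log(grid) + beta * np.log((1 + r) * (y - grid))
c1_grid = grid[np.argmax(utility)]
```

Adding an absorptive-capacity ceiling on investment, as in the dissertation, would bind exactly when income (oil revenue) is high, which is the mechanism behind the predicted production cutbacks.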
An expected utility maximizer walks into a bar…
Burghart, Daniel R.; Glimcher, Paul W.; Lazzaro, Stephanie C.
2013-01-01
We conducted field experiments at a bar to test whether blood alcohol concentration (BAC) correlates with violations of the generalized axiom of revealed preference (GARP) and the independence axiom. We found that individuals with BACs well above the legal limit for driving adhere to GARP and independence at rates similar to those who are sober. This finding led to the fielding of a third experiment to explore how risk preferences might vary as a function of BAC. We found gender-specific effects: Men did not exhibit variations in risk preferences across BACs. In contrast, women were more risk averse than men at low BACs but exhibited increasing tolerance towards risks as BAC increased. Based on our estimates, men and women’s risk preferences are predicted to be identical at BACs nearly twice the legal limit for driving. We discuss the implications for policy-makers. PMID:24244072
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
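As a baseline without coding, the assignment subproblem can be sketched as a greedy fractional allocation: fill the most reliable links first with the highest value-density message pieces, so that expected value (link working probability times value carried) is maximized. This toy sketch is our own and deliberately ignores the cross-link coding that the innovation shows can do better; all numbers are illustrative.

```python
def expected_value_assignment(links, messages):
    """Greedy fractional assignment of message pieces to unreliable links.

    links: list of (capacity, p_working); messages: list of (size, value).
    Messages may be split arbitrarily, and the value of a piece is
    proportional to its size. Returns the total expected decodable value.
    """
    links = sorted(links, key=lambda l: l[1], reverse=True)      # most reliable first
    messages = sorted(messages, key=lambda m: m[1] / m[0], reverse=True)  # by value density
    total = 0.0
    mi = 0
    remaining = messages[0][0] if messages else 0.0
    for cap, p in links:
        while cap > 1e-12 and mi < len(messages):
            size, value = messages[mi]
            piece = min(cap, remaining)
            total += p * value * piece / size   # expected value of this piece
            cap -= piece
            remaining -= piece
            if remaining <= 1e-12:
                mi += 1
                remaining = messages[mi][0] if mi < len(messages) else 0.0
    return total

# two links, two messages: the high-value message rides the reliable link
links = [(10.0, 0.9), (10.0, 0.5)]
messages = [(10.0, 100.0), (10.0, 40.0)]
ev = expected_value_assignment(links, messages)
```

The linear programming formulation in the innovation generalizes this by optimizing over coded combinations of messages, which this greedy uncoded baseline cannot represent.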
AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization
NASA Astrophysics Data System (ADS)
Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui
High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (E-M) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem
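The alternation the abstract describes, softly re-assigning multi-mapped reads and then re-estimating region enrichment, can be sketched in a few lines. This is an illustrative toy, not the AREM implementation; the read lists and region indices are hypothetical.

```python
# Toy sketch of EM-style re-weighting of ambiguously mapped reads
# (illustrative only, not the AREM implementation). Each read carries a
# list of candidate regions; enrichment weights and alignment
# probabilities are updated in alternation.

def em_align(reads, n_regions, n_iter=50):
    """reads: list of candidate-region lists, e.g. [[0], [0, 1], [1]]."""
    pi = [1.0 / n_regions] * n_regions        # region enrichment weights
    resp = []
    for _ in range(n_iter):
        # E-step: alignment probability of each read to each candidate
        resp = []
        for cands in reads:
            z = sum(pi[r] for r in cands)
            resp.append({r: pi[r] / z for r in cands})
        # M-step: re-estimate enrichment from the soft assignments
        counts = [0.0] * n_regions
        for r_prob in resp:
            for r, p in r_prob.items():
                counts[r] += p
        total = sum(counts)
        pi = [c / total for c in counts]
    return pi, resp

# One unique read supports region 0, two support region 1; the ambiguous
# read is split in proportion to the estimated enrichments.
pi, resp = em_align([[0], [0, 1], [1], [1]], n_regions=2)
```

At the fixed point the ambiguous read is assigned to region 1 with probability equal to region 1's estimated enrichment (2/3 here), which is how reads falling in repeat sequences can still contribute to peak calls.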
Optimal weight based on energy imbalance and utility maximization
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2016-01-01
This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. Applying mathematical tools from optimal control and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the values of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.
A compact formulation for maximizing the expected number of transplants in kidney exchange programs
NASA Astrophysics Data System (ADS)
Alvelos, Filipe; Klimentova, Xenia; Rais, Abdur; Viana, Ana
2015-05-01
Kidney exchange programs (KEPs) allow the exchange of kidneys between incompatible donor-recipient pairs. Optimization approaches can help KEPs in defining which transplants should be made among all incompatible pairs according to some objective. The most common objective is to maximize the number of transplants. In this paper, we propose an integer programming model which addresses the objective of maximizing the expected number of transplants, given that there are equal probabilities of failure associated with vertices and arcs. The model is compact, i.e., it has a polynomial number of decision variables and constraints, and therefore can be solved directly by a general-purpose integer programming solver (e.g., CPLEX).
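The objective above, the expected number of transplants when every vertex and arc fails independently with probability p, can be illustrated by brute-force enumeration over vertex-disjoint exchange cycles. This is a toy stand-in for the paper's compact integer program; the cycle sets and the value of p are hypothetical.

```python
from itertools import combinations

# Toy: pick vertex-disjoint exchange cycles maximizing the expected
# number of transplants, with a common failure probability p on each
# vertex and arc (enumeration stands in for an IP solver).

def expected_transplants(cycle, p):
    k = len(cycle)                    # a k-cycle has k vertices and k arcs
    return k * (1 - p) ** (2 * k)     # all 2k elements must succeed

def best_selection(cycles, p):
    best, best_val = [], 0.0
    for r in range(len(cycles) + 1):
        for subset in combinations(cycles, r):
            used = [v for c in subset for v in c]
            if len(used) != len(set(used)):
                continue              # cycles must be vertex-disjoint
            val = sum(expected_transplants(c, p) for c in subset)
            if val > best_val:
                best, best_val = list(subset), val
    return best, best_val

cycles = [(1, 2), (2, 3, 4), (5, 6)]
sel, val = best_selection(cycles, p=0.1)
```

With p = 0.1 a k-cycle yields k(1-p)^(2k) expected transplants, so the disjoint pair {(2,3,4), (5,6)} beats taking both 2-cycles; the compact formulation reaches the same optimum without enumerating subsets.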
Chen, Wei; Chang, Chunqi; Hu, Yong
2016-01-01
It is of great importance for intraoperative monitoring to accurately extract somatosensory evoked potentials (SEPs) and to track their changes rapidly. Currently, multi-trial averaging is widely adopted for SEP signal extraction. However, because the variations of SEP features across trials are lost, SEPs estimated in this way are not suitable for real-time monitoring of every single trial. To handle this issue, a number of single-trial SEP extraction approaches have been developed in the literature, such as ARX and SOBI, but most have limited performance because they make insufficient use of the multi-trial and multi-condition structure of the signals. In this paper, a novel Bayesian model of SEP signals is proposed that makes systematic use of multi-trial and multi-condition priors and other structural information in the signal by integrating both a cortical source propagation model and an SEP basis components model, and an Expectation Maximization (EM) algorithm is developed for single-trial SEP estimation under this model. Numerical simulations demonstrate that the developed method can provide reasonably good single-trial estimates of SEP as long as the signal-to-noise ratio (SNR) of the measurements is no worse than -25 dB. The effectiveness of the proposed method is further verified by its application to real SEP measurements from a number of different subjects during spinal surgeries. It is observed that, using the proposed approach, the main SEP features (i.e., latencies) can be reliably estimated on a single-trial basis, so the variation of latencies across trials can be traced, which provides solid support for surgical intraoperative monitoring. PMID:26742104
The predictive validity of prospect theory versus expected utility in health utility measurement.
Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis
2009-12-01
Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health. PMID:19833400
Disconfirmation of Expectations of Utility in e-Learning
ERIC Educational Resources Information Center
Cacao, Rosario
2013-01-01
Using pre-training and post-training paired surveys in e-learning based training courses, we have compared the "expectations of utility," measured at the beginning of an e-learning course, with the "perceptions of utility," measured at the end of the course, and related the results to the trainees' motivation. We have concluded…
Power Dependence in Individual Bargaining: The Expected Utility of Influence.
ERIC Educational Resources Information Center
Lawler, Edward J.; Bacharach, Samuel B.
1979-01-01
This study uses power-dependence theory as a framework for examining whether and how parties use information on each other's dependence to estimate the utility of an influence attempt. The effect of dependence in expected utilities is investigated (by role playing) in bargaining between employer and employee for a pay raise. (MF)
Gaussian beam decomposition of high frequency wave fields using expectation-maximization
Ariel, Gil; Engquist, Bjoern; Tanushev, Nicolay M.; Tsai, Richard
2011-03-20
A new numerical method for approximating highly oscillatory wave fields as a superposition of Gaussian beams is presented. The method estimates the number of beams and their parameters automatically. This is achieved by an expectation-maximization algorithm that fits real, positive Gaussians to the energy of the highly oscillatory wave fields and its Fourier transform. Beam parameters are further refined by an optimization procedure that minimizes the difference between the Gaussian beam superposition and the highly oscillatory wave field in the energy norm.
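The fitting step described above, matching real, positive Gaussians to an energy profile by EM, can be sketched in 1-D by treating the normalized energy on a grid as a weight on each sample point. This is an illustrative simplification: the method also fits the Fourier-domain energy and refines beam parameters by optimization, and the grid, initial parameters, and test profile below are hypothetical.

```python
import math

# Weighted EM fit of a two-Gaussian mixture to a sampled 1-D "energy"
# profile (illustrative toy, not the paper's full space/Fourier fit).

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def em_fit(xs, energy, mus, vars_, n_iter=100):
    w = [e / sum(energy) for e in energy]        # energy as point weights
    pis = [1.0 / len(mus)] * len(mus)
    for _ in range(n_iter):
        # E-step: responsibility of each Gaussian for each grid point
        resp = []
        for x in xs:
            num = [p * gauss(x, m, v) for p, m, v in zip(pis, mus, vars_)]
            z = sum(num)
            resp.append([n / z for n in num])
        # M-step: weighted updates of mixing weight, mean, and variance
        for k in range(len(mus)):
            nk = sum(w[i] * resp[i][k] for i in range(len(xs)))
            pis[k] = nk
            mus[k] = sum(w[i] * resp[i][k] * xs[i] for i in range(len(xs))) / nk
            vars_[k] = sum(w[i] * resp[i][k] * (xs[i] - mus[k]) ** 2
                           for i in range(len(xs))) / nk
    return pis, mus, vars_

xs = [i * 0.1 for i in range(-40, 41)]
energy = [gauss(x, -1.0, 0.04) + gauss(x, 1.5, 0.09) for x in xs]
pis, mus, vars_ = em_fit(xs, energy, mus=[-0.5, 0.5], vars_=[1.0, 1.0])
```

Starting from broad, poorly placed Gaussians, the EM iterations recover the two energy lobes, which is the sense in which the beam count and parameters can be estimated "automatically."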
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, called the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET
NASA Astrophysics Data System (ADS)
Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee
2016-04-01
Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
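The core MLEM update underlying the algorithm above can be sketched for a toy 2-pixel system. This shows plain MLEM only; the paper's WL-MLEM additionally folds wobbled sampling and the LSF model into the system matrix, and the matrix and activities below are hypothetical.

```python
# Plain MLEM iteration:  x <- x / (A^T 1) * A^T( y / (A x) )
# applied to a toy 2-bin, 2-pixel emission system.

def mlem(A, y, n_iter=200):
    n = len(A[0])
    x = [1.0] * n                                  # nonnegative start
    sens = [sum(A[i][j] for i in range(len(A))) for j in range(n)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]
        ratio = [y[i] / proj[i] for i in range(len(A))]
        back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Recover pixel activities from two noiseless projection bins.
A = [[1.0, 0.5],
     [0.5, 1.0]]
true_x = [2.0, 4.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x = mlem(A, y)
```

The multiplicative update keeps the estimate nonnegative and, for consistent data, converges to the true activities; resolution modeling enters only through what is placed in A, which is why the abstract notes the scheme extends directly to MAP and OSEM.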
Gong, Zongyi; Klanian, Kelly; Patel, Tushita; Sullivan, Olivia; Williams, Mark B.
2012-01-01
Purpose: We are developing a dual modality tomosynthesis breast scanner in which x-ray transmission tomosynthesis and gamma emission tomosynthesis are performed sequentially with the breast in a common configuration. In both modalities projection data are obtained over an angular range of less than 180° from one side of the mildly compressed breast resulting in incomplete and asymmetrical sampling. The objective of this work is to implement and evaluate a maximum likelihood expectation maximization (MLEM) reconstruction algorithm for gamma emission breast tomosynthesis (GEBT). Methods: A combination of Monte Carlo simulations and phantom experiments was used to test the MLEM algorithm for GEBT. The algorithm utilizes prior information obtained from the x-ray breast tomosynthesis scan to partially compensate for the incomplete angular sampling and to perform attenuation correction (AC) and resolution recovery (RR). System spatial resolution, image artifacts, lesion contrast, and signal to noise ratio (SNR) were measured as image quality figures of merit. To test the robustness of the reconstruction algorithm and to assess the relative impacts of correction techniques with changing angular range, simulations and experiments were both performed using acquisition angular ranges of 45°, 90° and 135°. For comparison, a single projection containing the same total number of counts as the full GEBT scan was also obtained to simulate planar breast scintigraphy. Results: The in-plane spatial resolution of the reconstructed GEBT images is independent of source position within the reconstructed volume and independent of acquisition angular range. For 45° acquisitions, spatial resolution in the depth dimension (the direction of breast compression) is degraded with increasing source depth (increasing distance from the collimator surface). Increasing the acquisition angular range from 45° to 135° both greatly reduces this depth dependence and improves the average depth
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Logistic regression extends linear regression analysis to a categorical dependent variable, using a linear combination of the independent variables to predict the probability that an event occurs. However, classifying all of the data by logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
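The contrast between the two clustering algorithms named above can be made concrete on 1-D toy data: K-means makes hard assignments to the nearest center, while EM (here with unit-variance Gaussian components, a simplifying assumption) makes soft ones. The data and initializations are hypothetical.

```python
import math

# 1-D toy contrasting hard (K-means) vs soft (EM) cluster assignment.

def kmeans(xs, centers, n_iter=20):
    for _ in range(n_iter):
        groups = [[] for _ in centers]
        for x in xs:
            j = min(range(len(centers)), key=lambda k: (x - centers[k]) ** 2)
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def em_1d(xs, mus, var=1.0, n_iter=20):
    pis = [1.0 / len(mus)] * len(mus)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            num = [p * math.exp(-0.5 * (x - m) ** 2 / var)
                   for p, m in zip(pis, mus)]
            z = sum(num)
            resp.append([n / z for n in num])
        # M-step: update mixing weights and means from the soft counts
        for k in range(len(mus)):
            nk = sum(r[k] for r in resp)
            pis[k] = nk / len(xs)
            mus[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
    return mus

xs = [-0.1, 0.0, 0.1, 0.2, 4.9, 5.0, 5.1, 5.2]
centers = kmeans(xs, centers=[0.0, 1.0])
mus = em_1d(xs, mus=[1.0, 4.0])
```

On well-separated data both recover essentially the same cluster centers; they differ when clusters overlap, which is where the soft EM responsibilities matter.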
Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures
Melis, Anastasios
2014-12-31
The project addressed the following technical barrier from the Biological Hydrogen Production section of the Fuel Cell Technologies Program Multi-Year Research, Development and Demonstration Plan: Low Sunlight Utilization Efficiency in Photobiological Hydrogen Production is due to a Large Photosystem Chlorophyll Antenna Size in Photosynthetic Microorganisms (Barrier AN: Light Utilization Efficiency).
Subjective Expected Utility: A Model of Decision-Making.
ERIC Educational Resources Information Center
Fischoff, Baruch; And Others
1981-01-01
Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
A simple test of expected utility theory using professional traders.
List, John A; Haigh, Michael S
2005-01-18
We compare behavior across students and professional traders from the Chicago Board of Trade in a classic Allais paradox experiment. Our experiment tests whether independence, a necessary condition in expected utility theory, is systematically violated. We find that both students and professionals exhibit some behavior consistent with the Allais paradox, but the data pattern does suggest that the trader population falls prey to the Allais paradox less frequently than the student population. PMID:15634739
Liu, Haiguang; Spence, John C.H.
2014-01-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated. PMID:25485120
Awate, Suyash P; Radhakrishnan, Thyagarajan
2015-01-01
In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a brand-new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art. PMID:26221663
An online expectation maximization algorithm for exploring general structure in massive networks
NASA Astrophysics Data System (ADS)
Chai, Bianfang; Jia, Caiyan; Yu, Jian
2015-11-01
Mixture model and stochastic block model (SBM) approaches to structure discovery employ a broad and flexible definition of vertex classes such that they are able to explore a wide variety of structure. Compared to the existing algorithms based on the SBM (their time complexities are O(mc²), where m and c are the numbers of edges and clusters), the algorithms of the mixture model are capable of dealing with networks with a large number of communities more efficiently due to their O(mc) time complexity. However, the mixture-model algorithms using the expectation maximization (EM) technique are still too slow to deal with real million-node networks, since they compute hidden variables on the entire network in each iteration. In this paper, an online variational EM algorithm is designed to improve the efficiency of the EM algorithms. In each iteration, our online algorithm samples a node and estimates its cluster memberships only from its adjacency links, and model parameters are then estimated from the memberships of the sampled node and the old model parameters obtained in the previous iteration. The provided online algorithm subsequently updates the model parameters from the links of each newly sampled node and explores the general structure of massive and growing networks with millions of nodes and hundreds of clusters in hours. Compared to the relevant algorithms on synthetic and real networks, the proposed online algorithm costs less with little or no degradation of accuracy. Results illustrate that the presented algorithm offers a good trade-off between precision and efficiency.
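The online idea described above, processing one sampled item per step and blending its sufficient statistics into running averages under a decaying step size, can be sketched on a toy 1-D mixture. The paper applies the scheme to node memberships in network models; the data, fixed variance, and step-size schedule below are hypothetical choices.

```python
import math
import random

# Online EM sketch: one E-step per sampled item, with sufficient
# statistics tracked as decaying running averages (toy 1-D mixture).

def online_em(stream, mus, var=1.0):
    pis = [1.0 / len(mus)] * len(mus)
    s_n = list(pis)                            # running E[z_k]
    s_x = [p * m for p, m in zip(pis, mus)]    # running E[z_k * x]
    for t, x in enumerate(stream, start=1):
        # E-step for this single item only
        num = [p * math.exp(-0.5 * (x - m) ** 2 / var)
               for p, m in zip(pis, mus)]
        z = sum(num)
        r = [n / z for n in num]
        # blend into running statistics with a decaying step size
        eta = 1.0 / (t + 1) ** 0.6
        s_n = [(1 - eta) * sn + eta * rk for sn, rk in zip(s_n, r)]
        s_x = [(1 - eta) * sx + eta * rk * x for sx, rk in zip(s_x, r)]
        # M-step from the running statistics
        pis = [sn / sum(s_n) for sn in s_n]
        mus = [sx / sn for sx, sn in zip(s_x, s_n)]
    return pis, mus

random.seed(0)
stream = [random.gauss(0.0, 1.0) if random.random() < 0.5
          else random.gauss(6.0, 1.0) for _ in range(4000)]
pis, mus = online_em(stream, mus=[1.0, 5.0])
```

Each step touches a single item, so the cost per update is constant rather than proportional to the whole data set, which is the property that makes the network version scale to millions of nodes.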
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a corresponding centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
NASA Astrophysics Data System (ADS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.
Statistical models of synaptic transmission evaluated using the expectation-maximization algorithm.
Stricker, C; Redman, S
1994-01-01
Amplitude fluctuations of evoked synaptic responses can be used to extract information on the probabilities of release at the active sites, and on the amplitudes of the synaptic responses generated by transmission at each active site. The parameters that describe this process must be obtained from an incomplete data set represented by the probability density of the evoked synaptic response. In this paper, the equations required to calculate these parameters using the Expectation-Maximization algorithm and the maximum likelihood criterion have been derived for a variety of statistical models of synaptic transmission. These models are ones where the probabilities associated with the different discrete amplitudes in the evoked responses are a) unconstrained, b) binomial, and c) compound binomial. The discrete amplitudes may be separated by equal (quantal) or unequal amounts, with or without quantal variance. Alternative models have been considered where the variance associated with the discrete amplitudes is sufficiently large such that no quantal amplitudes can be detected. These models involve the sum of a normal distribution (to represent failures) and a unimodal distribution (to represent the evoked responses). The implementation of the algorithm is described in each case, and its accuracy and convergence have been demonstrated. PMID:7948679
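The equally spaced ("quantal") case described above can be illustrated with a small EM fit in which component k has mean k·q, a shared known variance, and free weights. This is a simplified toy, not the paper's full model family; the amplitude data and initial values are hypothetical.

```python
import math

# EM with the quantal constraint: component k has mean k*q; the E-step
# is a standard mixture responsibility, the M-step updates the free
# weights and the quantal size q by weighted least squares.

def quantal_em(xs, n_sites, q, var, n_iter=200):
    pis = [1.0 / (n_sites + 1)] * (n_sites + 1)
    for _ in range(n_iter):
        resp = []
        for x in xs:
            num = [p * math.exp(-0.5 * (x - k * q) ** 2 / var)
                   for k, p in enumerate(pis)]
            z = sum(num)
            resp.append([n / z for n in num])
        nks = [sum(r[k] for r in resp) for k in range(n_sites + 1)]
        pis = [nk / len(xs) for nk in nks]
        # q maximizes sum_i sum_k r_ik * -(x_i - k q)^2 / (2 var)
        num_q = sum(r[k] * k * x for r, x in zip(resp, xs)
                    for k in range(n_sites + 1))
        den_q = sum(r[k] * k * k for r in resp
                    for k in range(n_sites + 1))
        q = num_q / den_q
    return q, pis

# Hypothetical amplitudes clustered near 0, 1.5 and 3.0 (true q = 1.5).
q, pis = quantal_em([0.02, -0.03, 1.48, 1.52, 1.55, 2.97, 3.02],
                    n_sites=2, q=1.2, var=0.01)
```

The weights play the role of the (here unconstrained) release probabilities; the binomial and compound-binomial variants of the paper would constrain pis instead of leaving them free.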
Bandwidth utilization maximization of scientific RF communication systems
Rey, D.; Ryan, W.; Ross, M.
1997-01-01
A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques. This is done with little or no increase in the transmit power which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... FR 8452-8460), pursuant to section 515 of the Treasury and General Government Appropriations Act for... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... June 2011 (76 FR 37376) intended to ensure and maximize the quality, objectivity, utility,...
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
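The gap between a greedy heuristic and the global optimum reported above can be reproduced on a toy budgeted selection problem. Enumeration stands in for the study's integer-programming solver, and the parcel names, utilities, and costs are hypothetical.

```python
from itertools import combinations

# Toy budgeted utility maximization: greedy by utility/cost ratio vs
# exhaustive (globally optimal) selection.

def greedy(parcels, budget):
    chosen, spent = [], 0.0
    for name, util, cost in sorted(parcels, key=lambda p: -p[1] / p[2]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, sum(u for n, u, c in parcels if n in chosen)

def optimal(parcels, budget):
    best, best_u = [], 0.0
    for r in range(len(parcels) + 1):
        for sub in combinations(parcels, r):
            if sum(c for _, _, c in sub) <= budget:
                u = sum(u for _, u, _ in sub)
                if u > best_u:
                    best, best_u = [n for n, _, _ in sub], u
    return best, best_u

parcels = [("A", 10.0, 5.0), ("B", 6.0, 4.0), ("C", 6.0, 4.0), ("D", 1.5, 1.0)]
g_sel, g_util = greedy(parcels, budget=8.0)
o_sel, o_util = optimal(parcels, budget=8.0)
```

Greedy grabs the best-ratio parcel A and then cannot afford B and C, whose combination is globally better; this is the kind of shortfall (up to 12% in the study) that motivates solving the problem with integer programming.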
Association Studies with Imputed Variants Using Expectation-Maximization Likelihood-Ratio Tests
Huang, Kuan-Chieh; Sun, Wei; Wu, Ying; Chen, Mengjie; Mohlke, Karen L.; Lange, Leslie A.; Li, Yun
2014-01-01
Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly more markers are being well or better imputed but at the same time, even more markers with relatively low minor allele frequency are being imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty for downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: I) when posterior probabilities of all potential genotypes are estimated; and II) when only the one-dimensional summary statistic, imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test for association based on posterior probabilities. When only imputed dosages are available (scenario II), we first sample the genotype probabilities from its posterior distribution given the dosages, and then apply the EM-LRT on the sampled probabilities. Our simulations show that type I error of the proposed EM-LRT methods under both scenarios are protected. Compared with existing methods, EM-LRT-Prob (for scenario I) offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II) achieves a similar level of statistical power as EM-LRT-Prob and, outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women’s Health Initiative Study, provide further support to the validity and efficiency of our proposed methods. PMID:25383782
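The scenario-I idea, per-genotype trait means estimated by EM over imputation posterior probabilities and a likelihood-ratio statistic against a single-mean null, can be sketched as follows. This is a simplified toy with a shared known variance; the trait values, posterior probabilities, and initial means are hypothetical.

```python
import math

# EM-LRT sketch: genotypes are latent with per-sample imputation
# posteriors as priors; EM fits per-genotype trait means, and twice the
# log-likelihood gain over a common-mean null is the test statistic.

def loglik(ys, probs, mus, var):
    ll = 0.0
    for y, pr in zip(ys, probs):
        ll += math.log(sum(p * math.exp(-0.5 * (y - m) ** 2 / var)
                           / math.sqrt(2 * math.pi * var)
                           for p, m in zip(pr, mus)))
    return ll

def em_lrt(ys, probs, var=1.0, n_iter=100):
    mus = [0.0, 0.1, 0.2]                      # per-genotype trait means
    for _ in range(n_iter):
        resp = []
        for y, pr in zip(ys, probs):
            num = [p * math.exp(-0.5 * (y - m) ** 2 / var)
                   for p, m in zip(pr, mus)]
            z = sum(num)
            resp.append([n / z for n in num])
        for g in range(3):
            ng = sum(r[g] for r in resp)
            if ng > 1e-12:
                mus[g] = sum(r[g] * y for r, y in zip(resp, ys)) / ng
    mu0 = sum(ys) / len(ys)                    # null: one common mean
    return 2 * (loglik(ys, probs, mus, var) - loglik(ys, probs, [mu0] * 3, var))

ys = [0.1, -0.1, 0.0, 2.0, 1.9, 2.1]
probs = [[0.9, 0.1, 0.0]] * 3 + [[0.0, 0.1, 0.9]] * 3
stat = em_lrt(ys, probs)
```

Because the genotype is never hardened to a best guess or a dosage, the imputation uncertainty propagates into the statistic, which is the mechanism behind the power gain the abstract reports for poorly imputed markers.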
NASA Astrophysics Data System (ADS)
Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.
2016-02-01
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5× 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.
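The iterative forward-project/correct cycle described above is structurally the classic ML-EM (Richardson-Lucy) update. A minimal 1-D sketch under an invented Gaussian PSF, not the authors' ray-tracing implementation:

```python
import numpy as np

def mlem_deconvolve(measured, psf, n_iter=200):
    """ML-EM (Richardson-Lucy) update for a 1-D nonnegative source profile."""
    psf = psf / psf.sum()
    x = np.full_like(measured, measured.mean())  # flat nonnegative start
    norm = np.convolve(np.ones_like(measured), psf, mode="same")
    for _ in range(n_iter):
        forward = np.convolve(x, psf, mode="same")       # project source to exit plane
        ratio = measured / np.maximum(forward, 1e-12)    # measured/expected correction
        x *= np.convolve(ratio, psf[::-1], mode="same") / norm
    return x

# toy source: a narrow peak blurred by a Gaussian-like PSF (numbers invented)
true = np.zeros(41); true[20] = 1.0
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
blurred = np.convolve(true, psf / psf.sum(), mode="same")
recovered = mlem_deconvolve(blurred, psf)
```

The multiplicative correction keeps the estimate nonnegative at every iteration, which is the property that makes ML-EM attractive for source-intensity reconstruction.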
Expected utility theory and risky choices with health outcomes.
Hellinger, F J
1989-03-01
Studies of people's attitude towards risk in the health sector often involve a comparison of the desirability of alternative medical treatments. Since the outcome of a medical treatment cannot be known with certainty, patients and physicians must make a choice that involves risk. Each medical treatment may be characterized as a gamble (or risky option) with a set of outcomes and associated probabilities. Expected utility theory (EUT) is the standard method to predict people's choices under uncertainty. The author presents the results of a survey that suggests people are very risk averse towards gambles involving health-related outcomes. The survey also indicates that there is significant variability in the risk attitudes across individuals for any given gamble and that there is significant variability in the risk attitudes of a given individual across gambles. The variability of risk attitudes of a given individual suggests that risk attitudes are not absolute but are functions of the parameters in the gamble. PMID:2927183
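Risk aversion of the kind the survey reports can be made concrete with a power (CRRA) utility: the certainty equivalent of a gamble falls below its expected value. The outcomes and risk-aversion coefficient below are purely illustrative:

```python
def expected_utility(outcomes, probs, gamma):
    """CRRA (power) utility u(w) = w**(1-gamma) / (1-gamma), gamma != 1."""
    u = lambda w: w ** (1 - gamma) / (1 - gamma)
    return sum(p * u(w) for w, p in zip(outcomes, probs))

def certainty_equivalent(outcomes, probs, gamma):
    """Sure amount whose utility equals the gamble's expected utility."""
    eu = expected_utility(outcomes, probs, gamma)
    return (eu * (1 - gamma)) ** (1 / (1 - gamma))

# a 50/50 gamble over two health-budget outcomes (invented numbers)
outcomes, probs = [100.0, 25.0], [0.5, 0.5]
ev = sum(p * w for w, p in zip(outcomes, probs))        # expected value: 62.5
ce = certainty_equivalent(outcomes, probs, gamma=0.5)   # below ev for gamma > 0
```

That the certainty equivalent depends on gamma, and that gamma itself can vary across gambles, is exactly the within-person variability of risk attitudes the survey observed.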
Guo, Jingyu; Tian, Dehua; McKinney, Brett A; Hartman, John L
2010-06-01
Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance
Deriving the Expected Utility of a Predictive Model When the Utilities Are Uncertain
Cooper, Gregory F.; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how a model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease. PMID:16779022
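A hedged sketch of the idea: rather than fixing the utilities of true/false positives and negatives, average the model's expected utility over draws from an assumed prior on those utilities. The utility ranges, threshold, and case probabilities below are invented for illustration and are not from the paper:

```python
import random

def model_expected_utility(p_disease, u_tp, u_fp, u_tn, u_fn, threshold=0.5):
    """Expected utility of treating when predicted probability exceeds threshold."""
    if p_disease >= threshold:
        return p_disease * u_tp + (1 - p_disease) * u_fp
    return (1 - p_disease) * u_tn + p_disease * u_fn

def eu_under_uncertain_utilities(cases, n_draws=10000, seed=0):
    """Average model utility over draws from (assumed) uniform priors on utilities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        # hypothetical ranges: treating the sick is best, missing disease worst
        u_tp = rng.uniform(0.8, 1.0)
        u_fp = rng.uniform(0.2, 0.6)
        u_tn = 1.0
        u_fn = rng.uniform(0.0, 0.2)
        total += sum(model_expected_utility(p, u_tp, u_fp, u_tn, u_fn)
                     for p in cases) / len(cases)
    return total / n_draws

cases = [0.9, 0.1, 0.7, 0.2]  # predicted disease probabilities for four patients
avg_eu = eu_under_uncertain_utilities(cases)
```

The result is a single decision-theoretic score that, unlike ROC area, reflects how the model would actually be used while still acknowledging that the exact utilities are unknown.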
Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method has been suggested to limit the Ki bias of sPatlak analysis in regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and can thus further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.
2009-01-01
In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368
OPTUM: Optimum Portfolio Tool for Utility Maximization documentation and user's guide.
VanKuiken, J. C.; Jusko, M. J.; Samsa, M. E.; Decision and Information Sciences
2008-09-30
The Optimum Portfolio Tool for Utility Maximization (OPTUM) is a versatile and powerful tool for selecting, optimizing, and analyzing portfolios. The software introduces a compact interface that facilitates problem definition, complex constraint specification, and portfolio analysis. The tool allows simple comparisons between user-preferred choices and optimized selections. OPTUM uses a portable, efficient, mixed-integer optimization engine (lp-solve) to derive the optimal mix of projects that satisfies the constraints and maximizes the total portfolio utility. OPTUM provides advanced features, such as convenient menus for specifying conditional constraints and specialized graphical displays of the optimal frontier and alternative solutions to assist in sensitivity visualization. OPTUM can be readily applied to other nonportfolio, resource-constrained optimization problems.
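OPTUM's core selection step is a budget-constrained 0/1 optimization. A minimal dynamic-programming sketch of that problem type (OPTUM itself uses the lp-solve mixed-integer engine and supports far richer constraints; the project costs and utilities here are invented):

```python
def optimal_portfolio(projects, budget):
    """0/1 knapsack by dynamic programming over integer costs.

    best[b] holds the (utility, chosen-index-set) achievable with budget b.
    """
    best = [(0.0, frozenset())] * (budget + 1)
    for idx, (cost, utility) in enumerate(projects):
        new = best[:]
        for b in range(cost, budget + 1):
            u, s = best[b - cost]
            if u + utility > new[b][0]:
                new[b] = (u + utility, s | {idx})
        best = new
    return best[budget]

projects = [(4, 10.0), (3, 7.0), (2, 4.5), (5, 12.0)]  # (cost, utility), invented
utility, chosen = optimal_portfolio(projects, budget=7)
```

For this instance the optimum funds projects 0 and 1 (total cost 7, utility 17.0), beating any ratio-based greedy pick; lp-solve handles the same structure with side constraints such as mutual exclusivity.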
Eberhardt, Christiane S.; Blanchard-Rohner, Geraldine; Lemaître, Barbara; Boukrid, Meriem; Combescure, Christophe; Othenin-Girard, Véronique; Chilin, Antonina; Petre, Jean; de Tejada, Begoña Martinez; Siegrist, Claire-Anne
2016-01-01
Background. Maternal immunization against pertussis is currently recommended after the 26th gestational week (GW). Data on the optimal timing of maternal immunization are inconsistent. Methods. We conducted a prospective observational noninferiority study comparing the influence of second-trimester (GW 13–25) vs third-trimester (≥GW 26) tetanus-diphtheria-acellular pertussis (Tdap) immunization in pregnant women who delivered at term. Geometric mean concentrations (GMCs) of cord blood antibodies to recombinant pertussis toxin (PT) and filamentous hemagglutinin (FHA) were assessed by enzyme-linked immunosorbent assay. The primary endpoints were GMCs and expected infant seropositivity rates, defined by birth anti-PT >30 enzyme-linked immunosorbent assay units (EU)/mL to confer seropositivity until 3 months of age. Results. We included 335 women (mean age, 31.0 ± 5.1 years; mean gestational age, 39.3 ± 1.3 GW) previously immunized with Tdap in the second (n = 122) or third (n = 213) trimester. Anti-PT and anti-FHA GMCs were higher following second- vs third-trimester immunization (PT: 57.1 EU/mL [95% confidence interval {CI}, 47.8–68.2] vs 31.1 EU/mL [95% CI, 25.7–37.7], P < .001; FHA: 284.4 EU/mL [95% CI, 241.3–335.2] vs 140.2 EU/mL [95% CI, 115.3–170.3], P < .001). The adjusted GMC ratios after second- vs third-trimester immunization differed significantly (PT: 1.9 [95% CI, 1.4–2.5]; FHA: 2.2 [95% CI, 1.7–3.0], P < .001). Expected infant seropositivity rates reached 80% vs 55% following second- vs third-trimester immunization (adjusted odds ratio, 3.7 [95% CI, 2.1–6.5], P < .001). Conclusions. Early second-trimester maternal Tdap immunization significantly increased neonatal antibodies. Recommending immunization from the second trimester onward would widen the immunization opportunity window and could improve seroprotection. PMID:26797213
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has witnessed an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456
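The benchmark named above, the Van der Pol oscillator x'' - mu(1 - x^2)x' + x = 0, can be simulated with a short RK4 integrator; for mu = 1 trajectories converge to a limit cycle of amplitude close to 2. This is a sketch of the benchmark dynamics only, not of the stochastic approximation EM estimation itself:

```python
def van_der_pol_step(state, mu, dt):
    """One RK4 step of the first-order system x' = y, y' = mu*(1 - x**2)*y - x."""
    def f(s):
        x, y = s
        return (y, mu * (1 - x * x) * y - x)
    x, y = state
    k1 = f(state)
    k2 = f((x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1]))
    k3 = f((x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1]))
    k4 = f((x + dt * k3[0], y + dt * k3[1]))
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

state = (0.1, 0.0)  # small perturbation away from the unstable fixed point
for _ in range(20000):
    state = van_der_pol_step(state, mu=1.0, dt=0.01)
```

A fitting algorithm such as the proposed SAEM would treat mu (and the initial conditions) as unknowns to be estimated from noisy, irregularly sampled observations of x.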
NASA Astrophysics Data System (ADS)
Reynolds, William R.; Talcott, Denise; Hilgers, John W.
2002-07-01
A new iterative algorithm (EMLS) based on the expectation-maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, it is less sensitive to constraint parameters, and it accommodates randomly missing data. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical and experimental results using the field test imagery are compared using an effective-aperture measure of resolution increase. The effective-aperture measure, based on examination of the edge-spread function, is detailed.
Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin
2016-02-15
Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay, and therefore to occur at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age-specific atlas-based methods and evaluate this using a unique manually traced database of high resolution images spanning 20 gestational weeks of development. We examine the complementary strengths of age-specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques, patch-based augmentation of Expectation-Maximization with weighted fusion and a spatial variability constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with the conventional age-specific atlas-based segmentation and direct patch-based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles and sulcal cerebrospinal fluid regions, while maintaining comparable results in deep gray matter. PMID:26702777
Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes
ERIC Educational Resources Information Center
Chen, Frederick H.
2010-01-01
The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
Gantet, Pierre; Payoux, Pierre; Celler, Anna; Majorel, Cynthia; Gourion, Daniel; Noll, Dominikus; Esquerre, Jean-Paul
2006-01-15
Single photon emission computed tomography imaging suffers from poor spatial resolution and high statistical noise. Consequently, the contrast of small structures is reduced, the visual detection of defects is limited and precise quantification is difficult. To improve the contrast, it is possible to include the spatially variant point spread function of the detection system in the iterative reconstruction algorithm. This kind of method is well known to be effective, but time consuming. We have developed a faster method to account for the spatial resolution loss in three dimensions, based on a post-reconstruction restoration method (OSEM-R). The method uses two steps. First, a noncorrected iterative ordered-subsets expectation-maximization (OSEM) reconstruction is performed and, in the second step, a three-dimensional (3D) iterative maximum-likelihood expectation-maximization (ML-EM) a posteriori spatial restoration of the reconstructed volume is done. In this paper, we compare the OSEM-R method to the standard OSEM-3D method in three studies (two in simulation and one from experimental data). In the first two studies, contrast, noise, and visual detection of defects are studied. In the third study, a quantitative analysis is performed on data obtained with an anthropomorphic striatal phantom filled with 123-I. From the simulations, we demonstrate that contrast as a function of noise and lesion detectability are very similar for both OSEM-3D and OSEM-R methods. In the experimental study, we obtained very similar values of activity-quantification ratios for different regions in the brain. The advantage of OSEM-R compared to OSEM-3D is a substantial gain in processing time. This gain depends on several factors. In a typical situation, for a 128 × 128 acquisition of 120 projections, OSEM-R is 13 or 25 times faster than OSEM-3D, depending on the calculation method used in the iterative restoration. In this paper, the OSEM-R method is tested with the approximation of depth independent
Maximizing precipitation utilization in dryland agriculture in South Africa — a review
NASA Astrophysics Data System (ADS)
Bennie, A. T. P.; Hensley, M.
2001-01-01
Agricultural systems in South Africa have developed under primarily arid and semi-arid climatic conditions where droughts are common. Adoption by farmers of agricultural practices that maximize precipitation utilization helps ensure production as well as economic and social sustainability. Precipitation use efficiency (PUE, kg produce ha⁻¹ mm⁻¹ of rainfall plus the change in soil water content of the root zone) proved to be a valuable parameter for comparing the level of precipitation utilization of different production or management practices for dryland crop production or rangeland utilization. Increasing the length of the fallow period before planting increased the amount of pre-plant water stored in the soil, thereby reducing the risk of drought damage to crops and also resulting in better yields. Deep drainage occurred only on sandy soils during wet seasons, and values as high as 20% of the annual precipitation were measured during years of above-average precipitation. In the experiments reported, soil cultivation generally increased runoff. The retention of large amounts (>6 t ha⁻¹) of crop residue on the soil surface is required to decrease runoff from cultivated fields. Between 50 and 75% of the annual precipitation is lost through evaporation from the soil surface, resulting in relatively low PUE values.
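The PUE definition quoted above reduces to a one-line computation. The yield and rainfall figures below are invented for illustration:

```python
def precipitation_use_efficiency(yield_kg_per_ha, rainfall_mm, soil_water_change_mm):
    """PUE = yield / (rainfall + change in root-zone soil water), kg ha^-1 mm^-1.

    soil_water_change_mm is positive when stored soil water was drawn down
    during the season (i.e., it contributed to the water supply).
    """
    water_used = rainfall_mm + soil_water_change_mm
    return yield_kg_per_ha / water_used

# hypothetical example: 2100 kg/ha grain on 480 mm rain plus 20 mm drawn
# from stored soil water
pue = precipitation_use_efficiency(2100.0, 480.0, 20.0)
```

With 50-75% of precipitation lost to soil evaporation, the same yield against the full water supply yields the low PUE values the review reports.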
Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te
2013-01-01
Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. These segmentation methods are based on clustering bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluating the severity of the vascular disease. In this study, we improved the expectation-maximization segmentation method by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise in signal profiles during tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, tissue at risk of infarct and tissue with or without complementary blood supply from the communicating arteries can be identified. PMID:23894386
Lee, Youngrok
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can be used to represent the survival time distribution of a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms. PMID: (not listed)
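A minimal sketch of the partial-label idea on a two-component exponential mixture: labeled samples keep hard responsibilities while unlabeled samples get the usual soft E-step. This illustrates the general mechanism only, not the paper's EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants, and all data below are synthetic:

```python
import math
import random

def em_exponential_mixture(data, labels, n_iter=200):
    """EM for a two-component exponential mixture; labels[i] in {0, 1, None}."""
    pi, rate = 0.5, [1.0, 0.2]  # mixing weight of class 0 and component rates
    for _ in range(n_iter):
        resp = []
        for t, lab in zip(data, labels):
            if lab is not None:                   # labeled: responsibility is fixed
                resp.append(1.0 if lab == 0 else 0.0)
            else:                                 # unlabeled: soft E-step
                p0 = pi * rate[0] * math.exp(-rate[0] * t)
                p1 = (1 - pi) * rate[1] * math.exp(-rate[1] * t)
                resp.append(p0 / (p0 + p1))
        # M-step: weighted MLEs for the mixing weight and exponential rates
        n0 = sum(resp)
        pi = n0 / len(data)
        rate[0] = n0 / sum(r * t for r, t in zip(resp, data))
        rate[1] = (len(data) - n0) / sum((1 - r) * t for r, t in zip(resp, data))
    return pi, rate

rng = random.Random(1)
data, labels = [], []
for i in range(2000):
    cls = 0 if rng.random() < 0.5 else 1
    data.append(rng.expovariate(2.0 if cls == 0 else 0.25))
    labels.append(cls if i % 10 == 0 else None)  # ~10% of samples are labeled
pi, rate = em_exponential_mixture(data, labels)
```

The labeled 10% anchors the component identities, so the recovered mixing weight and rates land near the generating values (0.5, 2.0, 0.25) without label switching.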
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2014-10-01
The expectation maximization (EM) is the standard training algorithm for hidden Markov model (HMM). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem of EM and proposes hybrid metaheuristic approaches to EM for HMM. In our earlier research, a hybrid of a constraint-based evolutionary learning approach to EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is that we develop a mathematical reformulation of HMM estimation by introducing a stochastic step between the EM steps and combine SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid which is a combination of an EA and the proposed SASEM, (EA-SASEM). The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and stochastic reformulation of HMM. The complementary properties of EA and SA and stochastic reformulation of HMM of SASEM provide EA-SASEM with sufficient potential to find better estimation for HMM. To the best of our knowledge, this type of hybridization and mathematical reformulation have not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the speech corpus: TIMIT. Experimental results show that proposed approaches obtain higher recognition accuracies than the EM algorithm and CEL-EM as well. PMID:24686310
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher
2016-01-01
Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM–progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250
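The GEM building block validated above is a Gaussian mixture fitted by EM. A 1-D two-component sketch on synthetic data (not visual field measurements):

```python
import numpy as np

def gmm_em(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])          # spread the initial means apart
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = resp.sum(axis=0)
        w = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])
w, mu, var = gmm_em(x)
```

In the POP pipeline the analogous mixture separates abnormal from normal visual fields in a much higher-dimensional space before the axes of defect patterns are extracted.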
The role of data assimilation in maximizing the utility of geospace observations (Invited)
NASA Astrophysics Data System (ADS)
Matsuo, T.
2013-12-01
Data assimilation can facilitate maximizing the utility of existing geospace observations by offering an ultimate marriage of inductive (data-driven) and deductive (first-principles based) approaches to addressing critical questions in space weather. Assimilative approaches that incorporate dynamical models are, in particular, capable of making a diverse set of observations consistent with physical processes included in a first-principles model, and allowing unobserved physical states to be inferred from observations. These points will be demonstrated in the context of the application of an ensemble Kalman filter (EnKF) to a thermosphere and ionosphere general circulation model. An important attribute of this approach is that the feedback between plasma and neutral variables is self-consistently treated both in the forecast model as well as in the assimilation scheme. This takes advantage of the intimate coupling between the thermosphere and ionosphere described in general circulation models to enable the inference of unobserved thermospheric states from the relatively plentiful observations of the ionosphere. Given the ever-growing infrastructure for the global navigation satellite system, this is indeed a promising prospect for geospace data assimilation. In principle, similar approaches can be applied to any geospace observing systems to extract more geophysical information from a given set of observations than would otherwise be possible.
2014-01-01
Background Recovering individual genomes from metagenomic datasets allows access to uncultivated microbial populations that may have important roles in natural and engineered ecosystems. Understanding the roles of these uncultivated populations has broad application in ecology, evolution, biotechnology and medicine. Accurate binning of assembled metagenomic sequences is an essential step in recovering the genomes and understanding microbial functions. Results We have developed a binning algorithm, MaxBin, which automates the binning of assembled metagenomic scaffolds using an expectation-maximization algorithm after the assembly of metagenomic sequencing reads. Binning of simulated metagenomic datasets demonstrated that MaxBin had high levels of accuracy in binning microbial genomes. MaxBin was used to recover genomes from metagenomic data obtained through the Human Microbiome Project, which demonstrated its ability to recover genomes from real metagenomic datasets with variable sequencing coverages. Application of MaxBin to metagenomes obtained from microbial consortia adapted to grow on cellulose allowed genomic analysis of new, uncultivated, cellulolytic bacterial populations, including an abundant myxobacterial population distantly related to Sorangium cellulosum that possessed a much smaller genome (5 MB versus 13 to 14 MB) but has a more extensive set of genes for biomass deconstruction. For the cellulolytic consortia, the MaxBin results were compared to binning using emergent self-organizing maps (ESOMs) and differential coverage binning, demonstrating that it performed comparably to these methods but had distinct advantages in automation, resolution of related genomes and sensitivity. Conclusions The automatic binning software that we developed successfully classifies assembled sequences in metagenomic datasets into recovered individual genomes. The isolation of dozens of species in cellulolytic microbial consortia, including a novel species of
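The expectation-maximization idea behind coverage-based binning can be sketched as a mixture model over scaffold read coverages. MaxBin itself also uses tetranucleotide frequencies; this hedged sketch fits only a two-component Poisson mixture to invented coverage values, which is not MaxBin's actual model but illustrates the EM machinery.

```python
import math

# Hypothetical read coverages for scaffolds from two genomes (~10x and ~40x).
coverages = [9, 11, 10, 12, 8, 10, 41, 39, 42, 38, 40, 43]

def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def em_poisson(data, lams, weights, iters=50):
    for _ in range(iters):
        # E-step: posterior probability that each scaffold belongs to each bin.
        resp = []
        for k in data:
            logp = [math.log(w) + poisson_logpmf(k, l)
                    for w, l in zip(weights, lams)]
            mx = max(logp)
            p = [math.exp(v - mx) for v in logp]  # stable normalization
            s = sum(p)
            resp.append([v / s for v in p])
        # M-step: update mixing weights and Poisson means.
        n = len(data)
        weights = [sum(r[j] for r in resp) / n for j in range(len(lams))]
        lams = [sum(r[j] * k for r, k in zip(resp, data)) /
                sum(r[j] for r in resp) for j in range(len(lams))]
    return lams, weights

lams, weights = em_poisson(coverages, lams=[5.0, 20.0], weights=[0.5, 0.5])
```

Each scaffold's posterior responsibilities serve as soft bin assignments, which EM refines until the per-bin mean coverages stabilize.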
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
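The EM iteration at the heart of such partial volume correction is, in one dimension, Richardson-Lucy deconvolution. The sketch below adds a stopping rule based on how much the EM correction factors change between iterations, loosely echoing the correction-matrix criterion described above; the kernel, signal, and tolerance are illustrative assumptions, and the PSF here is spatially invariant, unlike the paper's spatially varying version.

```python
def convolve(signal, kernel):
    # Truncated (zero-padded) 1-D convolution with a centered kernel.
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                s += k * signal[idx]
        out.append(s)
    return out

def rl_deconvolve(observed, kernel, max_iters=200, tol=1e-3):
    # Richardson-Lucy = EM for Poisson image restoration; uniform start.
    estimate = [sum(observed) / len(observed)] * len(observed)
    prev = None
    for _ in range(max_iters):
        blurred = convolve(estimate, kernel)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        corr = convolve(ratio, kernel[::-1])  # EM correction factors
        estimate = [e * c for e, c in zip(estimate, corr)]
        # Stop once the correction factors have essentially stopped changing.
        if prev is not None and max(abs(c - p) for c, p in zip(corr, prev)) < tol:
            break
        prev = corr
    return estimate

kernel = [0.25, 0.5, 0.25]          # illustrative blur, stands in for the PSF
true = [0.0] * 10
true[3] = 10.0                      # a point-like "hot spot"
observed = convolve(true, kernel)   # blurred measurement
restored = rl_deconvolve(observed, kernel)
```

Iteration sharpens the blurred spike back toward its original amplitude, and the automatic stopping rule removes the need to pick an iteration count by hand.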
Maximizing coupling-efficiency of high-power diode lasers utilizing hybrid assembly technology
NASA Astrophysics Data System (ADS)
Zontar, D.; Dogan, M.; Fulghum, S.; Müller, T.; Haag, S.; Brecher, C.
2015-03-01
In this paper, we present hybrid assembly technology to maximize coupling efficiency for spatially combined laser systems. High quality components, such as center-turned focusing units, as well as suitable assembly strategies, are necessary to obtain the highest possible output ratios. Alignment strategies are challenging tasks due to their complexity and sensitivity. Especially in low-volume production, fully automated systems are economically at a disadvantage, as operator experience is often expensive. However, the reproducibility and quality of automatically assembled systems can be superior. Therefore, automated and manual assembly techniques are combined to obtain high coupling efficiency while preserving maximum flexibility. The paper describes the necessary equipment and software to enable hybrid assembly processes. Micromanipulator technology with high step-resolution and six degrees of freedom provides a large number of possible evaluation points. Automated algorithms are necessary to speed up data gathering and alignment and to efficiently utilize the available granularity for manual assembly processes. Furthermore, an engineering environment is presented to enable rapid prototyping of automation tasks with simultaneous data evaluation. Integration with simulation environments, e.g. Zemax, allows the verification of assembly strategies in advance. Data-driven decision making ensures constant high quality, documents the assembly process, and is a basis for further improvement. The hybrid assembly technology has been applied in several applications, achieving efficiencies above 80%, and will be discussed in this paper. High coupling efficiency has been achieved with minimized assembly effort as a result of semi-automated alignment. This paper will focus on hybrid automation for optimizing and attaching turning mirrors and collimation lenses.
ERIC Educational Resources Information Center
Lawson, Laura; McNally, Marcia
1995-01-01
Including teens' needs in the planning and maintenance of urban space suggests new methods of layering utility and maximizing benefit to teens and community. Discusses the Berkeley Youth Alternatives (BYA) Youth Employment Landscape Program and BYA Community Garden Patch. Program descriptions and evaluation provide future direction. (LZ)
Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C
2004-06-30
Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize. PMID:15157975
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-27
... Management and Budget (67 FR 8452-8460), pursuant to section 515 of the Treasury and General Government... FR 8452-8460) that direct each federal agency to (1) Issue its own guidelines ensuring and maximizing... releases, archival records, public filings, subpoenas, or adjudicative processes. 3. ``Influential,''...
ERIC Educational Resources Information Center
Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise
2012-01-01
This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…
The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.
Zhang, Xian; Hirsch, Joy
2013-01-15
Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting the expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition (hyper-temporal resolution) by taking advantage of numerous trials that provide rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with the primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852
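The two regressors can be sketched directly: under exponential discounting, V(t) = R·exp(−k·τ(t)), where τ(t) is the expected time remaining until the reward, and ∂V/∂t follows by finite differences. The discount rate and τ trajectory below are illustrative assumptions, not the paper's fitted values.

```python
import math

R, k, dt = 1.0, 0.5, 0.1        # reward magnitude, discount rate, time step
# tau(t): expected time remaining until capture, shrinking as the pursuer
# closes in on the target (a linear trajectory, purely for illustration).
tau = [3.0 - 0.8 * i * dt for i in range(30)]

# Expected utility V(t) = R * exp(-k * tau(t)) under exponential discounting.
V = [R * math.exp(-k * x) for x in tau]

# Temporal derivative dV/dt by finite differences; in the study the
# corresponding continuous function served as an fMRI regressor.
dVdt = [(V[i + 1] - V[i]) / dt for i in range(len(V) - 1)]
```

As the capture time shrinks, V rises toward R and ∂V/∂t stays positive, so the two regressors carry distinct, separable time courses.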
Robust optimal sensor placement for operational modal analysis based on maximum expected utility
NASA Astrophysics Data System (ADS)
Li, Binbin; Der Kiureghian, Armen
2016-06-01
Optimal sensor placement is essentially a decision problem under uncertainty. The maximum expected utility theory and a Bayesian linear model are used in this paper for robust sensor placement aimed at operational modal identification. To avoid nonlinear relations between modal parameters and measured responses, we choose to optimize the sensor locations relative to identifying modal responses. Since the modal responses contain all the information necessary to identify the modal parameters, the optimal sensor locations for modal response estimation provide at least a suboptimal solution for identification of modal parameters. First, a probabilistic model for sensor placement considering model uncertainty, load uncertainty and measurement error is proposed. The maximum expected utility theory is then applied with this model by considering utility functions based on three principles: quadratic loss, Shannon information, and K-L divergence. In addition, the prior covariance of modal responses under band-limited white-noise excitation is derived and the nearest Kronecker product approximation is employed to accelerate evaluation of the utility function. As demonstration and validation examples, sensor placements in a 16-degrees-of-freedom shear-type building and in Guangzhou TV Tower under ground motion and wind load are considered. Placements of individual displacement meter, velocimeter, accelerometer and placement of mixed sensors are illustrated.
Liu, Xiaoxi; Wang, Yuhuan
2016-08-01
For the purpose of population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood modeling, such as FO, FOCE, Laplace, ITS and EM. These methods are all designed to estimate the set of parameters that maximize the joint likelihood of observations in a given problem. While FOCE is still currently the most widely used method in population modeling, EM methods are getting more popular as the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several versions of EM method implementation are available in public modeling software packages. Although there have been several studies and reviews comparing the performance of different methods in handling relatively simple models, there has not been a dedicated study comparing different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP and QRPEM) and FOCE (as a benchmark reference) were evaluated for their estimation accuracy and converging speed when solving models of increased complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to EM methods for simple structured models. For more complex models and/or sparse data, EM methods are much more robust. While the estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was: QRPEM, IMP and SAEM. IMP gave the most realistic estimation of parameter standard errors, while under- and over-estimation of standard errors were observed in SAEM and QRPEM methods. PMID:27215925
Jergens, Albert E; Willard, Michael D; Allenspach, Karin
2016-08-01
Flexible endoscopy has become a valuable tool for the diagnosis of many small animal gastrointestinal (GI) diseases, but the techniques must be performed carefully so that the results are meaningful. This article reviews the current diagnostic utility of flexible endoscopy, including practical/technical considerations for endoscopic biopsy, optimal instrumentation for mucosal specimen collection, the correlation of endoscopic indices to clinical activity and to histopathologic findings, and new developments in the endoscopic diagnosis of GI disease. Recent studies have defined endoscopic biopsy guidelines for the optimal number and quality of diagnostic specimens from different regions of the gut. They also have shown the value of ileal biopsy in the diagnosis of canine and feline chronic enteropathies, and have demonstrated the utility of endoscopic biopsy specimens beyond routine hematoxylin and eosin histopathological analysis, including their use in immunohistochemical, microbiological, and molecular studies. PMID:27387727
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for practical applicability on large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
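That a maximal independent set is always a dominating set is easy to verify in code: any vertex left outside the set must have a neighbor inside it, or it could have been added without breaking independence. A minimal Python sketch on a small hub-and-spoke graph; the degree-ordered greedy construction and the graph itself are illustrative assumptions, not the authors' method.

```python
def maximal_independent_set(adj):
    # Greedy MIS: repeatedly pick an uncovered vertex (here, highest degree
    # first, an illustrative heuristic) and exclude its neighbors.
    mis, covered = set(), set()
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        if v not in covered:
            mis.add(v)
            covered.add(v)
            covered |= adj[v]
    return mis

def is_dominating(adj, s):
    # Every vertex is in s or has at least one neighbor in s.
    return all(v in s or adj[v] & s for v in adj)

# Small hub-and-spoke graph mimicking a scale-free hub structure.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0, 5}, 5: {4}}
mis = maximal_independent_set(adj)
```

On this graph the hub plus one peripheral vertex dominate all six nodes, illustrating the small set sizes the abstract reports.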
Expected Utility Based Decision Making under Z-Information and Its Application.
Aliev, Rashad R; Mraiziq, Derar Atallah Talal; Huseynov, Oleg H
2015-01-01
Real-world decision relevant information is often partially reliable. The reasons include partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-numbers based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest together with the related NL based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from a human perception point of view as compared to a fuzzy number. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes the expected utility paradigm and is applied to a benchmark decision problem in the field of economics. PMID:26366163
Maximizing the utility of monitoring to the adaptive management of natural resources
Kendall, William L.; Moore, Clinton T.
2012-01-01
Data collection is an important step in any investigation about the structure or processes related to a natural system. In a purely scientific investigation (experiments, quasi-experiments, observational studies), data collection is part of the scientific method, preceded by the identification of hypotheses and the design of any manipulations of the system to test those hypotheses. Data collection and the manipulations that precede it are ideally designed to maximize the information that is derived from the study. That is, such investigations should be designed for maximum power to evaluate the relative validity of the hypotheses posed. When data collection is intended to inform the management of ecological systems, we call it monitoring. Note that our definition of monitoring encompasses a broader range of data-collection efforts than some alternative definitions – e.g. Chapter 3. The purpose of monitoring as we use the term can vary, from surveillance or “thumb on the pulse” monitoring (see Nichols and Williams 2006), intended to detect changes in a system due to any non-specified source (e.g. the North American Breeding Bird Survey), to very specific and targeted monitoring of the results of specific management actions (e.g. banding and aerial survey efforts related to North American waterfowl harvest management). Although a role of surveillance monitoring is to detect unanticipated changes in a system, the same result is possible from a collection of targeted monitoring programs distributed across the same spatial range (Box 4.1). In the face of limited budgets and many specific management questions, tying monitoring as closely as possible to management needs is warranted (Nichols and Williams 2006). Adaptive resource management (ARM; Walters 1986, Williams 1997, Kendall 2001, Moore and Conroy 2006, McCarthy and Possingham 2007, Conroy et al. 2008a) provides a context and specific purpose for monitoring: to evaluate decisions with respect to achievement
Utilization of negative beat-frequencies for maximizing the update-rate of OFDR
NASA Astrophysics Data System (ADS)
Gabai, Haniel; Botsev, Yakov; Hahami, Meir; Eyal, Avishay
2015-07-01
In traditional OFDR systems, the backscattered profile of a sensing fiber is inefficiently duplicated into the negative band of the spectrum. In this work, we present a new OFDR design and algorithm that remove this redundancy and make use of negative beat frequencies. In contrast to conventional OFDR designs, it facilitates efficient use of the available system bandwidth and enables distributed sensing at the maximum allowable interrogation update-rate for a given fiber length. To enable the reconstruction of negative beat frequencies, an I/Q-type receiver is used. In this receiver, both the in-phase (I) and quadrature (Q) components of the backscattered field are detected. Following detection, both components are digitally combined to produce a complex backscatter signal. Accordingly, due to its asymmetric nature, the produced spectrum is not corrupted by the appearance of negative beat-frequencies. Here, via a comprehensive computer simulation, we show that in contrast to conventional OFDR systems, I/Q OFDR can be operated at the maximum interrogation update-rate for a given fiber length. In addition, we experimentally demonstrate, for the first time, the ability of I/Q OFDR to utilize negative beat-frequencies for long-range distributed sensing.
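The role of I/Q detection can be sketched with a discrete Fourier transform: the complex signal I + jQ preserves the sign of a beat frequency, while real-only detection produces a symmetric spectrum in which only |f| is recoverable. All parameters below are arbitrary illustrative values, not an OFDR simulation.

```python
import cmath, math

N = 64          # number of samples (illustrative)
f_beat = -5     # beat frequency in DFT bins; negative on purpose

# Complex backscatter signal reconstructed from the two channels: I + jQ.
iq = [cmath.exp(2j * math.pi * f_beat * n / N) for n in range(N)]
real_only = [z.real for z in iq]    # what a single real-valued detector sees

def dft_peak_bin(signal):
    # Naive DFT over bins -N/2 .. N/2-1; return the bin of largest magnitude.
    n_s = len(signal)
    best, best_mag = None, -1.0
    for k in range(-n_s // 2, n_s // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * n / n_s)
                for n, x in enumerate(signal))
        if abs(s) > best_mag:
            best, best_mag = k, abs(s)
    return best

peak_iq = dft_peak_bin(iq)            # sign of the beat frequency is preserved
peak_real = dft_peak_bin(real_only)   # spectrum is symmetric: only |f| is known
```

Because the complex spectrum is asymmetric, positive and negative beat frequencies occupy distinct bins, which is exactly the redundancy removal the abstract describes.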
A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility.
Guo, Zhishan; Baruah, Sanjoy K
2016-02-01
In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure, and can also be implemented on circuits, on which the converging time can be significantly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases. PMID:26336153
Cano, I; Roca, J; Wagner, P D
2015-01-01
Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle, but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation–perfusion and skeletal muscle metabolism–perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude, and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (VO2) and on mitochondrial PO2 (PmitoO2). In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 at sea level by 32 mmHg, but muscle VO2 by only 122 ml min−1 (–3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced arterial PO2 by less than 1 mmHg and VO2 by 32 ml min−1 (–2.4%). Skeletal muscle heterogeneity led to a wide range of potential PmitoO2 among muscle regions, a range that becomes narrower as VO2 increases, and in regions with a low ratio of metabolic capacity to blood flow, PmitoO2 can exceed that of mixed muscle venous blood. For patients with severe COPD, peak VO2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of both VO2 and PmitoO2 in health and in diseases such as COPD if the extent of both lung ventilation–perfusion and tissue metabolism–perfusion heterogeneity is known. PMID:25640017
Holden, J E
2013-01-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, which combines 4-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established 3- and 4-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. PMID:23370699
Grimes, Morad; Bouhadjera, Abdelmalek; Haddad, Sofiane; Benkedidah, Toufik
2012-07-01
In testing cancellous bone using ultrasound, two types of longitudinal Biot's waves are observed in the received signal. These are known as fast and slow waves, and their appearance depends on the alignment of bone trabeculae in the propagation path and on the thickness of the specimen under test (SUT). They can be used as an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. However, the identification of these waves in the received signal can be difficult to achieve. In this study, ultrasonic wave propagation in a 4 mm-thick bovine cancellous bone in the direction parallel to the trabecular alignment is considered. The observed Biot's fast and slow longitudinal waves are superimposed, which makes it difficult to extract any information from the received signal. These two waves can be separated using the space-alternating generalized expectation-maximization (SAGE) algorithm, which has been used mainly in speech processing. In this new approach, parameters such as arrival time, center frequency, bandwidth, amplitude, phase, and velocity of each wave are estimated. The B-scan images and associated A-scans obtained through simulations using Biot's finite-difference time-domain (FDTD) method are validated experimentally using a thin bone sample obtained from the femoral head of a 30-month-old bovine. PMID:22284937
Haas, Kevin R; Yang, Haw; Chu, Jhih-Wei
2013-12-12
The dynamics of a protein along a well-defined coordinate can be formally projected onto the form of an overdamped Langevin equation. Here, we present a comprehensive statistical-learning framework for simultaneously quantifying the deterministic force (the potential of mean force, PMF) and the stochastic force (characterized by the diffusion coefficient, D) from single-molecule Förster-type resonance energy transfer (smFRET) experiments. The likelihood functional of the Langevin parameters, PMF and D, is expressed by a path integral of the latent smFRET distance that follows Langevin dynamics and is realized by the donor and the acceptor photon emissions. The solution is made possible by an eigendecomposition of the time-symmetrized form of the corresponding Fokker-Planck equation coupled with photon statistics. To extract the Langevin parameters from photon arrival time data, we advance the expectation-maximization algorithm in statistical learning, originally developed for and mostly used in discrete-state systems, to a general form in continuous space that allows for a variational calculus on the continuous PMF function. We also introduce the regularization of the solution space in this Bayesian inference based on a maximum trajectory-entropy principle. We use a highly nontrivial example with realistically simulated smFRET data to illustrate the application of this new method. PMID:23937300
2014-01-01
Background Population genetics and association studies usually rely on a set of known variable sites that are then genotyped in subsequent samples, because it is easier to genotype than to discover the variation. This is also true for structural variation detected from sequence data. However, the genotypes at known variable sites can only be inferred with uncertainty from low coverage data. Thus, statistical approaches that infer genotype likelihoods, test hypotheses, and estimate population parameters without requiring accurate genotypes are becoming popular. Unfortunately, the current implementations of these methods are intended to analyse only single nucleotide and short indel variation, and they usually assume that the two alleles in a heterozygous individual are sampled with equal probability. This is generally false for structural variants detected with paired ends or split reads. Therefore, the population genetics of structural variants cannot be studied, unless a painstaking and potentially biased genotyping is performed first. Results We present svgem, an expectation-maximization implementation to estimate allele and genotype frequencies, calculate genotype posterior probabilities, and test for Hardy-Weinberg equilibrium and for population differences, from the numbers of times the alleles are observed in each individual. Although applicable to single nucleotide variation, it aims at bi-allelic structural variation of any type, observed by either split reads or paired ends, with arbitrarily high allele sampling bias. We test svgem with simulated and real data from the 1000 Genomes Project. Conclusions svgem makes it possible to use low-coverage sequencing data to study the population distribution of structural variants without having to know their genotypes. Furthermore, this advance allows the combined analysis of structural and nucleotide variation within the same genotype-free statistical framework, thus preventing biases introduced by genotype
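The flavor of such an EM estimator can be sketched for a single bi-allelic site: latent genotypes with Hardy-Weinberg priors, binomial read-count likelihoods that include an allele sampling bias, and an M-step that updates the allele frequency from posterior genotype expectations. The counts, error rate, and bias value below are invented for illustration; this is a hedged sketch of the general idea, not svgem's actual model.

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Per-individual counts of reads supporting (ref, alt); illustrative data.
counts = [(10, 0), (9, 1), (4, 8), (3, 9), (0, 10), (1, 11), (10, 1), (5, 9)]
eps, bias = 0.01, 0.7  # sequencing error rate and alt-allele sampling bias
                       # (both assumed values; bias != 0.5 models the unequal
                       # sampling of alleles in heterozygotes noted above)

def em_allele_freq(counts, q=0.2, iters=100):
    for _ in range(iters):
        exp_alt = 0.0  # expected number of alt alleles across individuals
        for r, a in counts:
            n = r + a
            lik = [binom_pmf(a, n, eps),       # genotype ref/ref
                   binom_pmf(a, n, bias),      # heterozygote, biased sampling
                   binom_pmf(a, n, 1 - eps)]   # genotype alt/alt
            prior = [(1 - q)**2, 2 * q * (1 - q), q**2]  # Hardy-Weinberg
            post = [l * p for l, p in zip(lik, prior)]
            z = sum(post)
            post = [v / z for v in post]
            exp_alt += post[1] + 2 * post[2]
        q = exp_alt / (2 * len(counts))  # M-step: update allele frequency
    return q

q_hat = em_allele_freq(counts)
```

Note that no hard genotype calls are ever made: the allele frequency is estimated directly from posterior genotype probabilities, which is the genotype-free principle the abstract describes.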
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, the CEL-EM, for estimating the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) together with EM. The novelty of the CEL-EM is that it is applicable to the estimation of constraint-based models, such as the HMM, that have many constraints and large numbers of parameters and are conventionally estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies are proposed, using a constraint-based EA and EM, for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA. The other transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies of the CEL-EM use a staged-fusion approach in which EM is invoked periodically after the EA has run for a specified period, so as to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) based on variable segmentation is proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as an enhanced EM baseline (VIA-EM, constructed by applying the VIA to EM). PMID:19068441
Illustrating Caffeine's Pharmacological and Expectancy Effects Utilizing a Balanced Placebo Design.
ERIC Educational Resources Information Center
Lotshaw, Sandra C.; And Others
1996-01-01
Hypothesizes that pharmacological and expectancy effects may be two principles that govern caffeine consumption in the same way they affect other drug use. Tests this theory through a balanced placebo design with 100 male undergraduate students. Expectancy set and caffeine content appeared equally powerful, and worked additively, to affect…
Dolman, M; Chase, J
1996-08-01
A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravida females. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises, and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly. PMID:9238593
Expected Utility Theory as a Guide to Contingency (Allowance or Management Reserve) Allocation
Thibadeau, Barbara M
2006-01-01
In this paper, I view a project from the perspective of utility theory. I suggest that, by determining an optimal percent contingency (relative to remaining work) and identifying and enforcing a required change in behavior, from one that is risk-seeking to one that is risk-averse, a project's contingency can be managed more effectively. I argue that early on in a project, risk-seeking behavior dominates. During this period, requests for contingency are less rigorously scrutinized. As the design evolves, more accurate information becomes available. Once the designs have been finalized, the project team must transition from a free-thinking, exploratory mode to an execution mode. If projects do not transition fast enough from a risk-seeking to a risk-averse organization, an inappropriate allocation of project contingency could occur (too much too early in the project). I show that the behavioral patterns used to characterize utility theory are those that exist in the project environment. I define a project's utility and thus, provide project managers with a metric against which all gambles (requests for contingency) can be evaluated. I discuss other research as it relates to utility and project management. From empirical data analysis, I demonstrate that there is a direct correlation between progress on a project's design activities and the rate at which project contingency is allocated and recommend a transition time frame during which the rate of allocation should decrease and the project should transition from risk-seeking to risk-averse. I show that these data are already available from a project's earned value management system and thus, inclusion of this information in the standard monthly reporting suite can enhance a project manager's decision making capability.
Stevens, R.; Reber, E.
1993-01-01
The design, development and implementation of medical education software often occurs without sufficient consideration of the potential benefits that can be realized by making the software network aware. These benefits can be considerable and can greatly enhance the utilization and potential impact of the software. This article details how multiple aspects of the IMMEX problem solving project have benefited from taking maximum advantage of LAN resources. PMID:8130583
DeFrancisco, S.T.; Sanford, S.J.; Hong, K.C.
1995-12-31
The Water-Alternating-Steam-Process (WASP) has been utilized on Section 13D, West Coalinga Field since 1988. Originally implemented to control premature, high-temperature steam breakthrough, the process has improved sales oil recovery in both breakthrough and non-breakthrough patterns. A desktop, semi-conceptual simulation study was initiated in June 1993 to provide a theoretical basis for optimizing and monitoring the WASP project. The simulation study results showed that the existing WASP injection strategy could be further optimized. It also showed that conversion to continuous hot waterflood was the optimum injection strategy for the steamflood sands. The Section 13D WASP project was gradually converted to hot waterflood during 1994. Conversion to hot waterflood has significantly improved project cash flow and increased the value of the Section 13D thermal project.
Brotnow, Line; Reiss, David; Stover, Carla S.; Ganiban, Jody; Leve, Leslie D.; Neiderhiser, Jenae M.; Shaw, Daniel S.; Stevens, Hanna E.
2015-01-01
Background Mothers’ stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective “stressors” and subjective “distress” and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Method Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Results Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2Δ = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (R2Δ = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. Conclusions The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight. PMID:26544958
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
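The expectation-maximization reconstruction referred to for SPECT is typically the MLEM scheme, whose multiplicative update can be sketched as below. This is a generic illustration under a toy 3-detector, 2-pixel system matrix, not the authors' phantom, geometry, or code.

```python
import numpy as np

def mlem(A, y, iters=200):
    """Maximum-Likelihood Expectation-Maximization (MLEM) reconstruction.
    A: (detectors x pixels) system matrix; y: measured counts."""
    x = np.ones(A.shape[1])              # flat initial image
    sens = A.sum(axis=0)                 # sensitivity of each pixel
    for _ in range(iters):
        proj = A @ x                     # forward projection of current image
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

# noise-free toy example with a known two-pixel phantom
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 1.0])
y = A @ x_true                           # simulated detector counts
x_rec = mlem(A, y)
print(x_rec)
```

Because the update is multiplicative, the reconstruction stays non-negative, which is one reason MLEM is standard for emission tomography despite its slower convergence relative to direct inversion formulas.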
Cacioppo, Cara N.; Chandler, Ariel E.; Towne, Meghan C.; Beggs, Alan H.; Holm, Ingrid A.
2016-01-01
Purpose Much information on parental perspectives on the return of individual research results (IRR) in pediatric genomic research is based on hypothetical rather than actual IRR. Our aim was to understand how the expected utility to parents who received IRR on their child from a genetic research study compared to the actual utility of the IRR received. Methods We conducted individual telephone interviews with parents who received IRR on their child through participation in the Manton Center for Orphan Disease Research Gene Discovery Core (GDC) at Boston Children’s Hospital (BCH). Results Five themes emerged around the utility that parents expected and actually received from IRR: predictability, management, family planning, finding answers, and helping science and/or families. Parents expressing negative or mixed emotions after IRR return were those who did not receive the utility they expected from the IRR. Conversely, parents who expressed positive emotions were those who received as much or greater utility than expected. Conclusions Discrepancies between expected and actual utility of IRR affect the experiences of parents and families enrolled in genetic research studies. An informed consent process that fosters realistic expectations between researchers and participants may help to minimize any negative impact on parents and families. PMID:27082877
Sok, J; Hogeveen, H; Elbers, A R W; Velthuis, A G J; Oude Lansink, A G J M
2014-08-01
In order to put a halt to the Bluetongue virus serotype 8 (BTV-8) epidemic in 2008, the European Commission promoted vaccination at a transnational level as a new measure to combat BTV-8. Most European member states opted for a mandatory vaccination campaign, whereas the Netherlands, amongst others, opted for a voluntary campaign. For the latter to be effective, the farmer's willingness to vaccinate should be high enough to reach satisfactory vaccination coverage to stop the spread of the disease. This study looked at a farmer's expected utility of vaccination, which is expected to have a positive impact on the willingness to vaccinate. Decision analysis was used to structure the vaccination decision problem into decisions, events and payoffs, and to define the relationships among these elements. Two scenarios were formulated to distinguish farmers' mindsets, based on differences in dairy heifer management. For each of the scenarios, a decision tree was run for two years to study vaccination behaviour over time. The analysis was based on the expected utility criterion, which makes it possible to account for the effect of a farmer's risk preference on the vaccination decision. Probabilities were estimated by experts; payoffs were based on an earlier published study. According to the results of the simulation, the farmer initially decided to vaccinate against BTV-8 because the net expected utility of vaccination was positive. Re-vaccination was uncertain because the expected costs of a continued outbreak were lower. A risk-averse farmer in this respect is more likely to re-vaccinate. When heifers were retained on the farm for export, the net expected utility of vaccination was generally larger, and re-vaccination was thus more likely. For future animal health programmes that rely on a voluntary approach, the results show that the provision of financial incentives can be adjusted to the farmers' willingness to vaccinate over time. Important in this respect are the decision
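The role of risk preference in a vaccination decision of this kind can be sketched with an exponential (CARA) utility. The payoffs and probabilities below are invented for illustration only; they are not the study's expert estimates, and CARA is just one convenient choice of risk-averse utility.

```python
import math

def eu(payoffs, risk_aversion=0.0):
    """Expected utility of (probability, monetary payoff) pairs under an
    exponential (CARA) utility; risk_aversion=0 reduces to expected value."""
    if risk_aversion == 0.0:
        return sum(p * x for p, x in payoffs)
    return sum(p * (1 - math.exp(-risk_aversion * x)) / risk_aversion
               for p, x in payoffs)

# hypothetical payoffs: vaccination is a certain cost, not vaccinating is a
# gamble on whether a costly outbreak reaches the farm
vaccinate = [(1.0, -500.0)]
no_vacc = [(0.925, 0.0), (0.075, -6000.0)]   # 7.5% chance of a big loss

for ra in (0.0, 0.001):
    choice = "vaccinate" if eu(vaccinate, ra) > eu(no_vacc, ra) else "don't"
    print(f"risk aversion {ra}: {choice}")
```

With these numbers, a risk-neutral decision maker skips vaccination (expected loss 450 versus a certain 500), while even mild risk aversion flips the choice, mirroring the paper's point that risk-averse farmers are more likely to (re-)vaccinate.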
NASA Astrophysics Data System (ADS)
Beller-Simms, N.; Metchis, K.
2014-12-01
Water utilities, reeling from the increased impacts of successive extreme events such as floods, droughts, and derechos, are taking a more proactive role in preparing for future incursions. A recent study by Federal and water foundation investigators reveals how six US water utilities and their regions prepared for, responded to, and coped with recent extreme weather and climate events, and the lessons they are using to plan future adaptation and resilience activities. Two case studies will be highlighted. (1) Sonoma County, CA, has had alternating floods and severe droughts. In 2009, this area, home to competing water users, namely agricultural crops, wineries, tourism, and fisheries, faced a three-year drought, accompanied at the end by intense frosts. Competing uses of water threatened the grape harvest, endangered the fish industry, and resulted in a series of regulations and court cases. Five years later, new efforts by partners in the entire watershed have identified mutual opportunities for increased basin sustainability in the face of a changing climate. (2) Washington DC had a derecho in late June 2012, which curtailed water, communications, and power delivery during a record heat spell that affected hundreds of thousands of residents and lasted over the height of the tourist-intensive July 4th holiday. Lessons from this event were applied three months later in anticipation of the approaching Superstorm Sandy. This study will help other communities improve their resiliency in the face of future climate extremes. For example, this study revealed that (1) communities are planning for multiple types and occurrences of extreme events, which are becoming more severe and frequent and are impacting communities that are expanding into more vulnerable areas, and (2) decisions by one sector cannot be made in a vacuum and require the scientific, sectoral and citizen communities to work towards sustainable solutions.
Gimple, L.W.; Hutter, A.M. Jr.; Guiney, T.E.; Boucher, C.A.
1989-12-01
The prognostic value of predischarge dipyridamole-thallium scanning after uncomplicated myocardial infarction was determined by comparison with submaximal exercise electrocardiography and 6-week maximal exercise thallium imaging and by correlation with clinical events. Two endpoints were defined: cardiac events and severe ischemic potential. Of the 40 patients studied, 8 had cardiac events within 6 months (1 died, 3 had myocardial infarction and 4 had unstable angina requiring hospitalization). The finding of any redistribution on dipyridamole-thallium scanning was common (77%) in these patients and had poor specificity (29%). Redistribution outside of the infarct zone, however, had equivalent sensitivity (63%) and better specificity (75%) for events (p less than 0.05). Both predischarge dipyridamole-thallium and submaximal exercise electrocardiography identified 5 of the 8 events (p = 0.04 and 0.07, respectively). The negative predictive accuracy for events for both dipyridamole-thallium and submaximal exercise electrocardiography was 88%. In addition to the 8 patients with events, 16 other patients had severe ischemic potential (6 had coronary bypass surgery, 1 had inoperable 3-vessel disease and 9 had markedly abnormal 6-week maximal exercise tests). Predischarge dipyridamole-thallium and submaximal exercise testing also identified 8 and 7 of these 16 patients with severe ischemic potential, respectively. Six of the 8 cardiac events occurred before 6-week follow-up. A maximal exercise thallium test at 6 weeks identified 1 of the 2 additional events within 6 months correctly. Thallium redistribution after dipyridamole in coronary territories outside the infarct zone is a sensitive and specific predictor of subsequent cardiac events and identifies patients with severe ischemic potential.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
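The ‘keep their options open’ prediction can be made concrete with a toy score that adds an entropy bonus to expected utility. This is a stylized stand-in for the paper's free-energy formulation, with made-up numbers and an arbitrary entropy weight, intended only to show why an entropy-seeking agent departs from a pure utility maximizer.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete outcome distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def score(probs, utils, w_entropy=1.0):
    """Expected utility plus an entropy bonus over outcomes; the bonus is a
    simplified proxy for the epistemic term in surprise-minimizing accounts."""
    eu = sum(p * u for p, u in zip(probs, utils))
    return eu + w_entropy * entropy(probs)

# two options with identical expected utility but different outcome entropy
narrow_probs, narrow_utils = [1.0, 0.0], [1.0, 1.0]   # deterministic outcome
spread_probs, spread_utils = [0.5, 0.5], [1.0, 1.0]   # uniform over outcomes
print(score(narrow_probs, narrow_utils), score(spread_probs, spread_utils))
```

A pure expected-utility maximizer is indifferent between the two options (both have expected utility 1.0); the entropy bonus breaks the tie in favor of the option that keeps both outcomes possible.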
Cooper, Rachel
2014-02-01
In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk be made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced, or we can conclude that the use of formal decision procedures, such as MEU, is problematic. PMID:24449251
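The MEU rule applied here is just the textbook procedure of weighting each outcome's utility by its probability and picking the action with the highest total. A minimal sketch follows; the probabilities and utilities are placeholders for illustration, not the paper's historical estimates.

```python
def expected_utility(outcomes):
    """Expected utility of an action given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# hypothetical numbers only: operating has a 40% chance of improvement (u=80),
# 50% chance of no change (u=30), 10% chance of death (u=0); not operating
# means certain continued severe illness (u=30)
operate = [(0.4, 80.0), (0.5, 30.0), (0.1, 0.0)]
no_op = [(1.0, 30.0)]

best = max(("operate", operate), ("no operation", no_op),
           key=lambda kv: expected_utility(kv[1]))[0]
print(best)
```

Under these illustrative numbers MEU favors operating (47 versus 30), which captures the paper's point: given the beliefs of the era, a formally "rational" procedure could endorse a practice we now condemn.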
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision
Oh, Yuri; Xu, Xu; Kim, Ji Young; Park, Jong Moon
2015-08-01
Brown seaweed contains up to 67% carbohydrates by dry weight and presents high potential as a polysaccharide feedstock for biofuel production. To use brown seaweed effectively as a biomass, degradation of alginate is the major challenge due to its complicated structure and low solubility in water. This study focuses on the isolation of alginate-degrading bacteria, the determination of optimum fermentation conditions, and a comparison of the conventional single fermentation system with a two-phase fermentation system that separately uses alginate and mannitol extracted from Laminaria japonica. The maximum yields of organic acid production and volatile solids (VS) reduction were 0.516 g/g and 79.7%, respectively, using the two-phase fermentation system, in which alginate fermentation was carried out at pH 7 and mannitol fermentation at pH 8. The two-phase fermentation system increased the yield of organic acid production by 1.14 times and led to a 1.45-times reduction of VS when compared to the conventional single fermentation system at pH 8. The results show that the two-phase fermentation system improved the utilization of alginate by separating alginate from mannitol, leading to enhanced alginate lyase activity. PMID:26098412
Larry G. Felix; P. Vann Bush; Stephen Niksa
2003-04-30
In full-scale boilers, the effect of biomass cofiring on NO{sub x} and unburned carbon (UBC) emissions has been found to be site-specific. Few sets of field data are comparable, and no consistent database of information exists upon which cofiring fuel choice or injection system design can be based to assure that NO{sub x} emissions will be minimized and UBC reduced. This report presents the results of a comprehensive project that generated an extensive set of pilot-scale test data used to validate a new predictive model for the cofiring of biomass and coal. All testing was performed at the 3.6 MMBtu/hr (1.75 MW{sub t}) Southern Company Services/Southern Research Institute Combustion Research Facility, where a variety of burner configurations, coals, biomasses, and biomass injection schemes were utilized to generate a database of consistent, scalable, experimental results (422 separate test conditions). This database was then used to validate a new model for predicting NO{sub x} and UBC emissions from the cofiring of biomass and coal. The model is based on an Advanced Post-Processing (APP) technique that generates an equivalent network of idealized reactor elements from a conventional CFD simulation. The APP reactor network is a computational environment that allows for the incorporation of all relevant chemical reaction mechanisms and provides a new tool to quantify NO{sub x} and UBC emissions for any cofired combination of coal and biomass.
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
NASA Astrophysics Data System (ADS)
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column-generation-based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only those teams which can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and we perform a computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve influence maximization problems that are otherwise hard to solve.
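The single-team building block underlying such formulations is often approximated by the classic greedy heuristic with Monte-Carlo spread estimation under the independent-cascade model. The sketch below is that generic baseline, not the paper's LP or column-generation method; the toy graph, activation probability, and trial count are arbitrary choices for the example.

```python
import random

def simulate_spread(graph, seeds, p=0.2, trials=200):
    """Estimate expected spread of a seed set under independent cascades:
    each newly activated node activates each out-neighbor with probability p."""
    rng = random.Random(0)               # fixed seed for reproducibility
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.2):
    """Greedy baseline: repeatedly add the node with the largest estimated
    marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

# small directed toy network in which node 0 is a hub
g = {0: [1, 2, 3, 4], 1: [2], 2: [], 3: [4], 4: []}
print(greedy_seeds(g, 2))
```

The greedy rule enjoys a (1 - 1/e) approximation guarantee for this objective because expected spread is monotone submodular, which is also why the Monte-Carlo estimate is the expensive inner loop that LP and column-generation approaches try to organize more cleverly.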
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
ERIC Educational Resources Information Center
Sullivan, Patricia
1999-01-01
Parents must learn to transmit a sense of high expectations to their children (related to behavior and accomplishments) without crushing them with too much pressure. This means setting realistic expectations based on their children's special abilities, listening to their children's feelings about the expectations, and understanding what…
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039
Maximal combustion temperature estimation
NASA Astrophysics Data System (ADS)
Golodova, E.; Shchepakina, E.
2006-12-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.
ERIC Educational Resources Information Center
Cannon, John
2011-01-01
Awareness of expectations is so important in the facilities business. The author's experience has taught him that it is essential to understand how expectations impact people's lives, as well as the lives of those for whom they provide services every day. This article presents examples and ideas that offer insight to help educators…
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. PMID:24530825
Maximizing Classroom Participation.
ERIC Educational Resources Information Center
Englander, Karen
2001-01-01
Discusses how to maximize classroom participation in the English-as-a-Second-or-Foreign-Language classroom, and provides a classroom discussion method that is based on real-life problem solving. (Author/VWL)
Generation and Transmission Maximization Model
2001-04-05
GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system, taking into account not only a single system's limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at a minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detailed operational limitations such as power plant ramp rates and hydropower reservoir constraints.
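The objective described above, maximizing net revenue (income minus expenses) subject to physical and institutional limits, can be illustrated with a deliberately tiny dispatch search. Everything below is a hypothetical sketch: the plants, prices, and brute-force search are illustrative, not GTMax's data model or algorithms.

```python
from itertools import product as cartesian

def best_dispatch(price, plants, tx_limit, step=1.0):
    """Grid-search the generation levels (MW) that maximize net revenue.

    price    -- spot-market price, $/MWh
    plants   -- list of (capacity_mw, marginal_cost) pairs
    tx_limit -- maximum total MW deliverable over the transmission link
    """
    def levels(cap):
        return [i * step for i in range(int(cap / step) + 1)]

    best_profit, best_gen = float("-inf"), None
    for gen in cartesian(*(levels(cap) for cap, _ in plants)):
        if sum(gen) > tx_limit:          # respect the transmission limit
            continue
        revenue = price * sum(gen)
        cost = sum(g * mc for g, (_, mc) in zip(gen, plants))
        if revenue - cost > best_profit:
            best_profit, best_gen = revenue - cost, gen
    return best_profit, best_gen

# Hypothetical system: 100 MW @ $20/MWh and 80 MW @ $35/MWh plants, a
# 150 MW link, and a $30/MWh spot price. Only the cheaper plant earns a
# positive margin, so it should run at full output and the other at zero.
profit, dispatch = best_dispatch(30.0, [(100, 20.0), (80, 35.0)], 150)
```

A real model of this kind would use a linear- or mixed-integer-programming solver rather than enumeration, but the shape of the objective and constraints is the same.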
Maximal Outboxes of Quadrilaterals
ERIC Educational Resources Information Center
Zhao, Dongsheng
2011-01-01
An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Google has emerged as the leading Web search engine, with recent research from Nielsen NetRatings reporting that about 40 percent of all U.S. households used the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.
Infrared Maximally Abelian Gauge
Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio
2007-02-27
The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will appear with higher probability. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the Traveling Salesman Problem (TSP).
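The core idea, treating the function to be maximized as a probability density so that larger values appear more often, has a loose classical analogue that can be sketched in a few lines. This is only an illustration of the sampling intuition, not the report's quantum-classical hybrid; the function and all parameters are made up.

```python
import math
import random

def sample_then_maximize(f, lo, hi, f_upper, n=2000, seed=1):
    """Draw from f (unnormalized density) by rejection sampling on
    [lo, hi] and return the best accepted sample. f_upper must bound f."""
    rng = random.Random(seed)
    best_x, best_f = None, float("-inf")
    for _ in range(n):
        x = rng.uniform(lo, hi)
        # accept x with probability f(x) / f_upper
        if rng.uniform(0.0, f_upper) <= f(x):
            if f(x) > best_f:
                best_x, best_f = x, f(x)
    return best_x

f = lambda x: math.exp(-((x - 0.7) ** 2) / 0.01)   # sharp peak at x = 0.7
x_star = sample_then_maximize(f, 0.0, 1.0, 1.0)
```

Because high-f regions are sampled more densely, the best accepted point concentrates near the true maximizer as the sample count grows.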
NASA Technical Reports Server (NTRS)
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top-level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide X-ray imaging at better than one-tenth of a microarcsecond in order to satisfy these requirements. We will give an overview of the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort, which includes a study of a baseline design as well as two alternative approaches.
Varieties of maximal line subbundles
NASA Astrophysics Data System (ADS)
Oxbury, W. M.
2000-07-01
The point of this note is to make an observation concerning the variety M(E) parametrizing line subbundles of maximal degree in a generic stable vector bundle E over an algebraic curve C. M(E) is smooth and projective and its dimension is known in terms of the rank and degree of E and the genus of C (see Section 1). Our observation (Theorem 3.1) is that it has exactly the Chern numbers of an étale cover of the symmetric product S^δC, where δ = dim M(E). This suggests looking for a natural map M(E) → S^δC; however, it is not clear what such a map should be. Indeed, we exhibit an example in which M(E) is connected and deforms non-trivially with E, while there are only finitely many isomorphism classes of étale covers of the symmetric product. This shows that for a general deformation in the family, M(E) cannot be such a cover (see Section 4). One may conjecture that M(E) is always connected. This would follow from ampleness of a certain Picard-type bundle on the Jacobian, and there seems to be some evidence for expecting this, though we do not pursue this question here. Note that by forgetting the inclusion of a maximal line subbundle in E we get a natural map from M(E) to the Jacobian whose image W(E) is analogous to the classical (Brill-Noether) varieties of special line bundles. (In this sense M(E) is precisely a generalization of the symmetric products of C.) In Section 2 we give some results on W(E) which generalise standard Brill-Noether properties. These are due largely to Laumon, to whom the author is grateful for the reference [9].
ERIC Educational Resources Information Center
Logan, Jennifer A.; Beatty, Maile; Woliver, Renee; Rubinstein, Eric P.; Averbach, Abigail R.
2005-01-01
Over time, improvements in HIV/AIDS surveillance and service utilization data have increased their usefulness for planning programs, targeting resources, and otherwise informing HIV/AIDS policy. However, community planning groups, service providers, and health department staff often have difficulty in interpreting and applying the wide array of…
Maximizing Brightness in Photoinjectors
Limborg-Deprey, C.; Tomizawa, H.; /JAERI-RIKEN, Hyogo
2011-11-30
If the laser pulse driving photoinjectors could be arbitrarily shaped, the emittance growth induced by space charge effects could be totally compensated for. In particular, for RF guns, the photo-electron distribution leaving the cathode should be close to a uniform distribution contained in a 3D-ellipsoidal contour. For photo-cathodes which have very fast emission times, and assuming a perfectly uniform emitting surface, this could be achieved by shaping the laser in a pulse of constant fluence limited in space by a 3D-ellipsoidal contour. Simulations show that in such conditions, with the standard linear emittance compensation, the emittance at the end of the photo-injector beamline approaches the minimum value imposed by the cathode emittance. We explore how the emittance and the brightness can be optimized for a photoinjector based on an RF gun, depending on the peak current requirements. Brightness, which is expressed as the ratio of peak current over the product of the two transverse emittances, seems to be maximized for small charges. Numerical simulations also show that for a very high charge per bunch (10 nC), emittances as small as 2 mm-mrad could be reached by using 3D-ellipsoidal laser pulses in an S-band gun. The production of 3D-ellipsoidal pulses is very challenging but seems worth the effort. We briefly discuss some of the present ideas and the difficulties of achieving such pulses.
Smoking Outcome Expectancies among College Students.
ERIC Educational Resources Information Center
Brandon, Thomas H.; Baker, Timothy B.
Alcohol expectancies have been found to predict later onset of drinking among adolescents. This study examined whether the relationship between level of alcohol use and expectancies is paralleled with cigarette smoking, and attempted to identify the content of smoking expectancies. An instrument to measure the subjective expected utility of…
ERIC Educational Resources Information Center
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
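One standard calculus-free route (which may or may not be among the article's five methods) is completing the square: x(a - x) = a²/4 - (x - a/2)², so the product peaks at x = a/2 with value a²/4. The identity can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

def product(a, x):
    """The quantity x(a - x) whose maximum we want."""
    return x * (a - x)

a = Fraction(10)
peak = product(a, a / 2)                       # a^2 / 4 = 25 here
samples = [Fraction(n, 7) for n in range(71)]  # 0, 1/7, ..., 10
# no sample point exceeds the peak, and all points other than a/2
# fall strictly below it
never_exceeded = all(product(a, x) <= peak for x in samples)
strictly_below = all(product(a, x) < peak for x in samples if x != a / 2)
```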
NASA Astrophysics Data System (ADS)
Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo
2016-03-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
All maximally entangling unitary operators
Cohen, Scott M.
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ, and a proof that these capacities must be equal when d_A = d_B.
Multidimensional Scaling for Measuring Alcohol Expectancies.
ERIC Educational Resources Information Center
Rather, Bruce; And Others
Although expectancies for alcohol have been shown to influence drinking behavior, current expectancy questionnaires do not lend themselves to the study of how expectancies are represented in memory. Two studies were conducted which utilized multidimensional scaling techniques designed to produce hypothesized representations of cognitive…
The biomass utilization task consists of the evaluation of a biomass conversion technology, including research and development initiatives. The project is expected to provide information on co-control of pollutants, as well as to prove the feasibility of biomass conversion techn...
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
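The least-squares step described above can be sketched as follows: fit inferred mean anomaly M(t) = M0 + n·t against time and read the mean motion n off as the fitted slope. The function name and data below are hypothetical, not EMMIE's internals, and real mean anomaly would first need unwrapping past 360 degrees.

```python
def fit_mean_motion(times, anomalies):
    """Ordinary least-squares slope (deg/day) of mean anomaly vs. time."""
    m = len(times)
    t_bar = sum(times) / m
    a_bar = sum(anomalies) / m
    num = sum((t - t_bar) * (a - a_bar) for t, a in zip(times, anomalies))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# synthetic, already-unwrapped nodal data at n = 360.9 deg/day
times = [0.0, 0.5, 1.0, 1.5, 2.0]
anoms = [10.0 + 360.9 * t for t in times]
n_est = fit_mean_motion(times, anoms)
```

In the tool described above, this initial slope estimate would then be iterated manually against the downtrack-position plot rather than used as-is.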
Utilizing Partnerships to Maximize Resources in College Counseling Services
ERIC Educational Resources Information Center
Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas
2015-01-01
Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…
Do Speakers and Listeners Observe the Gricean Maxim of Quantity?
ERIC Educational Resources Information Center
Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda
2006-01-01
The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…
Changing expectancies: cognitive mechanisms and context effects.
Wiers, Reinout W; Wood, Mark D; Darkes, Jack; Corbin, William R; Jones, Barry T; Sher, Kenneth J
2003-02-01
This article presents the proceedings of a symposium at the 2002 RSA Meeting in San Francisco, organized by Reinout W. Wiers and Mark D. Wood. The symposium combined two topics of recent interest in studies of alcohol expectancies: cognitive mechanisms in expectancy challenge studies, and context-related changes of expectancies. With increasing recognition of the substantial role played by alcohol expectancies in drinking, investigators have begun to develop and evaluate expectancy challenge procedures as a potentially promising new prevention strategy. The two major issues addressed in the symposium were whether expectancy challenges result in changes in expectancies that mediate intervention-outcome relations, and the influence of simulated bar environments ("bar labs," in which challenges are usually done) on expectancies. The presentations were (1) An introduction, by Jack Darkes; (2) Investigating the utility of alcohol expectancy challenge with heavy drinking college students, by Mark D. Wood; (3) Effects of an expectancy challenge on implicit and explicit expectancies and drinking, by Reinout W. Wiers; (4) Effects of graphic feedback and simulated bar assessments on alcohol expectancies and consumption, by William R. Corbin; (5) Implicit alcohol associations and context, by Barry T. Jones; and (6) A discussion by Kenneth J. Sher, who pointed out that it is important not only to study changes of expectancies in the paradigm of an expectancy challenge but also to consider the role of changing expectancies in natural development and in treatments not explicitly aimed at changing expectancies. PMID:12605068
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B
2016-08-01
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. PMID:27388875
Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.
ERIC Educational Resources Information Center
Ravizza, Kenneth; Rotella, Robert
Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…
Using Debate to Maximize Learning Potential: A Case Study
ERIC Educational Resources Information Center
Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda
2007-01-01
Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…
Factors affecting maximal acid secretion
Desai, H. G.
1969-01-01
The mechanisms by which different factors affect the maximal acid secretion of the stomach are discussed with particular reference to nationality, sex, age, body weight or lean body mass, procedural details, mode of calculation, the nature, dose and route of administration of a stimulus, the synergistic action of another stimulus, drugs, hormones, electrolyte levels, anaemia or deficiency of the iron-dependent enzyme system, vagal continuity and parietal cell mass. PMID:4898322
Learning to maximize reward rate: a model based on semi-Markov decision processes
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the value of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of the decision thresholds until it finally finds the optimal values. PMID:24904252
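The claim that each condition deserves its own threshold can be made concrete with a toy sequential-sampling calculation. The drift-diffusion expressions for accuracy and mean decision time below are a common textbook choice, a modeling assumption on our part rather than the paper's SMDP learner, and all parameter values are hypothetical.

```python
import math

def reward_rate(a, drift, noise=1.0, iti=2.0):
    """Expected rewards per second with symmetric decision bounds at +/- a.

    Uses standard drift-diffusion formulas: accuracy and mean decision
    time as functions of threshold a, drift, and noise; iti is the
    inter-trial interval.
    """
    x = a * drift / noise ** 2
    p_correct = 1.0 / (1.0 + math.exp(-2.0 * x))
    mean_dt = (a / drift) * math.tanh(x)
    return p_correct / (mean_dt + iti)

def best_threshold(drift):
    """Grid-search the threshold that maximizes reward rate."""
    grid = [0.1 * k for k in range(1, 51)]   # thresholds 0.1 .. 5.0
    return max(grid, key=lambda a: reward_rate(a, drift))

a_easy = best_threshold(drift=0.3)   # easy condition: strong drift
a_hard = best_threshold(drift=0.1)   # hard condition: weak drift
```

In this toy setting the reward-rate-maximizing threshold generally differs between the two drift conditions, which is exactly the paper's point that one threshold per condition is required.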
Viswanathan, Vilayanur V.; Kintner-Meyer, Michael CW
2010-09-30
Plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) are expected to gain significant market share over the next decade. The economic viability for such vehicles is contingent upon the availability of cost-effective batteries with high power and energy density. For initial commercial success, government subsidies will be highly instrumental in allowing PHEVs to gain a foothold. However, in the long-term, for electric vehicles to be commercially viable, the economics have to be self-sustaining. Towards the end of battery life in the vehicle, the energy capacity left in the battery is not sufficient to provide the designed range for the vehicle. Typically, the automotive manufacturers indicated the need for battery replacement when the remaining energy capacity reaches 70-80%. There is still sufficient power (kW) and energy capacity (kWh) left in the battery to support various grid ancillary services such as balancing, spinning reserve, load following services. As renewable energy penetration increases, the need for such balancing services is expected to increase. This work explores optimality for the replacement of transportation batteries to be subsequently used for grid services. This analysis maximizes the value of an electric vehicle battery to be used as a transportation battery (in its first life) and then as a resource for providing grid services (in its second life). The results are presented across a range of key parameters, such as depth of discharge (DOD), number of batteries used over the life of the vehicle, battery life in vehicle, battery state of health (SOH) at end of life in vehicle and ancillary services rate. The results provide valuable insights for the automotive industry into maximizing the utility and the value of the vehicle batteries in an effort to either reduce the selling price of EVs and PHEVs or maximize the profitability of the emerging electrification of transportation.
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on the real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight routes addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied to real-world, large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small scale networks and the near optimal solution in large scale networks. The third algebraic connectivity maximization problem with operating cost constraint is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
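The greedy idea behind route addition can be illustrated on a toy graph: among candidate links, add the one that most increases the algebraic connectivity (the second-smallest Laplacian eigenvalue). Everything below is a hypothetical sketch with a 4-node network; it recomputes the eigenvalue exactly for each candidate via plain Jacobi rotations, rather than using MGP's perturbation approach.

```python
import math

def laplacian(n, edges):
    """Graph Laplacian L = D - A for an undirected, unweighted graph."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def eigenvalues_sym(A, rotations=100, tol=1e-12):
    """Classical Jacobi iteration for a symmetric matrix; sorted eigenvalues."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(rotations):
        # locate the largest off-diagonal element
        p, q, off = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > off:
                    off, p, q = abs(A[i][j]), i, j
        if off < tol:
            break
        # rotation angle that zeroes A[p][q]
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                     # A <- A @ G
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
        for k in range(n):                     # A <- G.T @ A
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

def algebraic_connectivity(n, edges):
    return eigenvalues_sym(laplacian(n, edges))[1]

path = [(0, 1), (1, 2), (2, 3)]          # path graph P4
candidates = [(0, 2), (0, 3), (1, 3)]    # hypothetical new routes
base = algebraic_connectivity(4, path)
best = max(candidates, key=lambda e: algebraic_connectivity(4, path + [e]))
```

Here the edge closing the path into a cycle wins, since the cycle's algebraic connectivity (2) beats that of either chord addition (1).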
User Expectations: Nurses' Perspective.
Gürsel, Güney
2016-01-01
Healthcare is a technology-intensive industry. Although all healthcare staff need qualified computer support, physicians and nurses need it most. As nursing practice is an information-intensive activity, understanding nurses' expectations of healthcare information systems (HCIS) is essential to meeting their needs and supporting them better. In this study, the perceived importance of nurses' expectations of HCIS is investigated, and two HCIS are evaluated for meeting the expectations of nurses by using fuzzy logic methodologies. PMID:27332398
A Rational Expectations Experiment.
ERIC Educational Resources Information Center
Peterson, Norris A.
1990-01-01
Presents a simple classroom simulation of the Lucas supply curve mechanism with rational expectations. Concludes that the exercise has proved very useful as an introduction to the concepts of rational and adaptive expectations, the Lucas supply curve, the natural rate hypothesis, and random supply shocks. (DB)
ERIC Educational Resources Information Center
Schwartzman, Steven
1993-01-01
Discusses the surprising result that the expected number of marbles of one color drawn from a set of marbles of two colors after two draws without replacement is the same as the expected number of that color marble after two draws with replacement. Presents mathematical models to help explain this phenomenon. (MDH)
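The equality described above follows from linearity of expectation (each draw is marginally red with probability r/(r+b), with or without replacement), and can be verified by exact enumeration. The 5-red/3-blue urn below is an arbitrary example.

```python
from fractions import Fraction

def expected_red(r, b, replace):
    """Exact expected number of red marbles seen in two draws."""
    total = r + b
    e = Fraction(0)
    for first_red in (True, False):
        p1 = Fraction(r if first_red else b, total)
        # composition of the urn before the second draw
        r2 = r if (replace or not first_red) else r - 1
        t2 = total if replace else total - 1
        for second_red in (True, False):
            p2 = Fraction(r2 if second_red else t2 - r2, t2)
            e += p1 * p2 * (int(first_red) + int(second_red))
    return e

with_rep = expected_red(5, 3, replace=True)      # 2 * 5/8 = 5/4
without_rep = expected_red(5, 3, replace=False)  # also 5/4
```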
ERIC Educational Resources Information Center
Santini, Joseph
2014-01-01
This article describes a teacher's reflections on the matter of student expectations. Santini begins with a common understanding of the "Pygmalion effect" from research projects conducted in earlier years that intimated that "people's expectations could influence other people in the world around them." In the world of deaf…
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
Maximally coherent mixed states: Complementarity between maximal coherence and mixedness
NASA Astrophysics Data System (ADS)
Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar
2015-05-01
Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upper-bounds the maximum quantum coherence for fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understand the interplay between a resource and noise in open quantum systems.
Maximal acceleration and radiative processes
NASA Astrophysics Data System (ADS)
Papini, Giorgio
2015-08-01
We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.
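For a sense of scale, the Caianiello limit 𝒜_m = 2mc³/ℏ can be evaluated numerically; a quick editorial sketch for an electron, using CODATA constants (not a computation from the paper):

```python
# Caianiello's maximal proper acceleration A_m = 2 m c^3 / hbar.
c = 299_792_458.0          # speed of light, m/s
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e = 9.109_383_7015e-31   # electron mass, kg

A_max = 2 * m_e * c**3 / hbar
print(f"Maximal proper acceleration for an electron: {A_max:.3e} m/s^2")
```

The value is of order 10²⁹ m/s², which is why the limit only matters in extreme regimes, close to the physical singularity discussed in the abstract.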
Lighting spectrum to maximize colorfulness.
Masuda, Osamu; Nascimento, Sérgio M C
2012-02-01
The spectrum of modern illumination can be computationally tailored considering the visual effects of lighting. We investigated the spectral profiles of the white illumination maximizing the theoretical limits of the perceivable object colors. A large number of metamers with various degrees of smoothness were generated on and around the Planckian locus, and the volume in the CIELAB space of the optimal colors for each metamer was calculated. The optimal spectrum was found at the color temperature of around 5.7×10³ K, had three peaks at both ends of the visible band and at around 510 nm, and was 25% better than daylight and 35% better than Thornton's prime color lamp. PMID:22297368
Robine, J. M.; Romieu, I.; Cambois, E.
1999-01-01
An outline is presented of progress in the development of health expectancy indicators, which are growing in importance as a means of assessing the health status of populations and determining public health priorities. PMID:10083720
Maximal dinucleotide and trinucleotide circular codes.
Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe
2016-01-21
We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly less than 6 elements (maximum size of dinucleotide circular codes). On the other hand, a computer calculus shows that there are maximal trinucleotide circular codes with less than 20 elements (maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements and no maximal trinucleotide circular code having less than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined. PMID:26382231
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Maximal Oxygen Intake and Maximal Work Performance of Active College Women.
ERIC Educational Resources Information Center
Higgs, Susanne L.
Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…
Ray, P.E.
1998-09-04
This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met expectations and made notable improvement in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be Excellent.
ERIC Educational Resources Information Center
Williams, Roger; Williams, Sherry
2014-01-01
Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…
Parenting with High Expectations
ERIC Educational Resources Information Center
Timperlake, Benna Hull; Sanders, Genelle Timperlake
2014-01-01
In some ways raising deaf or hard of hearing children is no different than raising hearing children; expectations must be established and periodically tweaked. Benna Hull Timperlake, who with husband Roger, raised two hearing children in addition to their deaf daughter, Genelle Timperlake Sanders, and Genelle, now a deaf professional, share their…
Great Expectations. [Lesson Plan].
ERIC Educational Resources Information Center
Devine, Kelley
Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy, and that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…
Heterogeneity in expected longevities.
Pijoan-Mas, Josep; Ríos-Rull, José-Víctor
2014-12-01
We develop a new methodology to compute differences in the expected longevity of individuals of a given cohort who are in different socioeconomic groups at a certain age. We address the two main problems associated with the standard use of life expectancy: (1) that people's socioeconomic characteristics change, and (2) that mortality has decreased over time. Our methodology uncovers substantial heterogeneity in expected longevities, yet much less heterogeneity than what arises from the naive application of life expectancy formulae. We decompose the longevity differences into differences in health at age 50, differences in the evolution of health with age, and differences in mortality conditional on health. Remarkably, education, wealth, and income are health-protecting but have very little impact on two-year mortality rates conditional on health. Married people and nonsmokers, however, benefit directly in their immediate mortality. Finally, we document an increasing time trend of the socioeconomic gradient of longevity in the period 1992-2008, and we predict an increase in the socioeconomic gradient of mortality rates for the coming years. PMID:25391225
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Does mental exertion alter maximal muscle activation?
Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald
2014-01-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404
Inflation in maximal gauged supergravities
Kodama, Hideo; Nozawa, Masato
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s=0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10⁻³ and is close to the value in the Starobinsky model.
Post-Secondary Expectations and Educational Attainment
ERIC Educational Resources Information Center
Sciarra, Daniel T.; Ambrosino, Katherine E.
2011-01-01
This study utilized student, teacher, and parent expectations during high school to analyze their predictive effect on post-secondary education status two years after scheduled graduation. The sample included 5,353 students, parents and teachers who participated in the Educational Longitudinal Study (ELS; 2002-2006). The researchers analyzed data…
An Activity for Exploring Marital Expectations
ERIC Educational Resources Information Center
Saur, William G.
1976-01-01
The learning activity, designed for the use of high school students in a family life education course, is designed to explore attitudes towards mate qualities in order to increase the students' awareness of marital expectations. The activity utilizes the format of an auction game and a group discussion. (EC)
Glacier Surface Monitoring by Maximizing Mutual Information
NASA Astrophysics Data System (ADS)
Erten, E.; Rossi, C.; Hajnsek, I.
2012-07-01
The contribution of Polarimetric Synthetic Aperture Radar (PolSAR) images, compared with single-channel SAR, to temporal scene characterization has been described in the literature as adding valuable information. However, despite a number of recent studies focusing on single-polarization glacier monitoring, the potential of polarimetry to estimate the surface velocity of glaciers has not been explored, due to the complex mechanism of polarization through glacier/snow. In this paper, a new approach to the problem of monitoring glacier surface velocity is proposed by means of temporal PolSAR images, using a basic concept from information theory: Mutual Information (MI). The proposed polarimetric tracking method applies the MI to measure the statistical dependence between temporal polarimetric images, which is assumed to be maximal if the images are geometrically aligned. Since the proposed polarimetric tracking method is very general, it can be applied to any kind of multivariate remote sensing data, such as multi-spectral optical and single-channel SAR images. The proposed polarimetric tracking is then used to retrieve the surface velocity of the Aletsch glacier in Switzerland and of the Inyltshik glacier in Kyrgyzstan with two different SAR sensors: the Envisat C-band (single polarized) and the DLR airborne L-band (fully polarimetric) systems, respectively. Investigating the effect of the number of channels (polarimetry) on tracking demonstrated that the presence of snow, as expected, shifts the location of the phase center across polarizations, for example when tracking with temporal HH compared to temporal VV channels. In short, a change in the polarimetric signature of the scatterer can change the phase center, raising the question of how much of the observed displacement is motion rather than penetration. In this paper, it is shown that, by considering the multi-channel SAR statistics, it is possible to optimally separate these contributions.
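The alignment criterion can be illustrated independently of the SAR specifics: MI between two images, estimated from their joint histogram, peaks when the images are geometrically registered. A minimal editorial sketch with NumPy on synthetic data (not glacier imagery, and not the paper's estimator):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """MI in nats, estimated from the joint histogram of two equal-size images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
# A co-registered "second acquisition": the same scene plus sensor noise.
acquisition = scene + 0.05 * rng.standard_normal((64, 64))

mi_aligned = mutual_information(scene, acquisition)
mi_shifted = mutual_information(scene, np.roll(acquisition, 7, axis=1))
print(mi_aligned, mi_shifted)
```

A tracking method of the kind described would search over shifts and keep the offset where this dependence measure is maximal.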
Specificity of a Maximal Step Exercise Test
ERIC Educational Resources Information Center
Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.
2007-01-01
To adhere to the principle of "exercise specificity" exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO sub 2) when tested using an activity specific, maximal step…
Statistical mechanics of maximal independent sets
NASA Astrophysics Data System (ADS)
Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl
2009-12-01
The graph theoretic concept of maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding minimum and maximum-density maximal independent sets are hard optimization problems. In this paper, we use cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica symmetric and one-step replica symmetry breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
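To make the object concrete: a maximal independent set is an independent (packing) set that cannot be extended, so every node is either occupied or adjacent to an occupied node (covering). A simple greedy pass always produces one; it is finding minimum- or maximum-density ones that is hard. A short editorial sketch on a toy graph:

```python
def greedy_maximal_independent_set(adjacency):
    """Greedy pass: occupy a node unless a neighbour is already occupied.

    Any such run yields a *maximal* independent set; which one you get (and
    its density) depends on the visiting order, which is what makes the
    minimum/maximum-density versions hard optimization problems.
    """
    occupied = set()
    for node in adjacency:
        if not any(nbr in occupied for nbr in adjacency[node]):
            occupied.add(node)
    return occupied

# 5-cycle: 0-1-2-3-4-0
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
mis = greedy_maximal_independent_set(cycle5)
print(mis)  # {0, 2}: independent, and no further node can be added
```

In the game-theoretic reading of the abstract, the occupied nodes are the contributors in a pure Nash equilibrium of the public-goods game: no occupied node wants to drop out, and no unoccupied node wants to contribute.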
Utilizing Alcohol Expectancies in the Treatment of Alcoholism.
ERIC Educational Resources Information Center
Brown, Sandra A.
The heterogeneity of alcoholic populations may be one reason that few specific therapeutic approaches to the treatment of alcoholism have been consistently demonstrated to improve treatment outcome across studies. To individualize alcoholism treatment, dimensions which are linked to drinking or relapse and along which alcoholics display significant…
Utilization of the Garland Assessment of Graduation Expectations Test Results.
ERIC Educational Resources Information Center
Strozeski, Michael W.
Virtually every school system is concerned with two educational considerations: (1) where the students are academically, and (2) how to get the students to a particular set of points. Minimum competency testing has been proposed as one way to handle these concerns. Competency testing has, however, been criticized for encouraging "teaching to the…
The futility of utility: how market dynamics marginalize Adam Smith
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2000-10-01
Economic theorizing is based on the postulated, nonempiric notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility. The results are supposed to represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics I show the following. A utility function generally does not exist mathematically due to nonintegrable dynamics when production/investment are accounted for, resolving Mirowski's thesis. Price as a function of demand does not exist mathematically either. All equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following. Market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price. Utility maximization does not describe equilibrium. Maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data, or in real market data.
Dialysis centers - what to expect
Many people have dialysis in a treatment center. This article focuses on hemodialysis at a treatment center. Artificial kidneys - dialysis centers - what to expect; Dialysis - what to expect; Renal replacement therapy - dialysis centers - what to expect
Illustrated Examples of the Effects of Risk Preferences and Expectations on Bargaining Outcomes.
ERIC Educational Resources Information Center
Dickinson, David L.
2003-01-01
Describes bargaining examples that use expected utility theory. Provides example results that are intuitive, shown graphically and algebraically, and offer upper-level student samples that illustrate the usefulness of the expected utility theory. (JEH)
Matching, maximizing, and hill-climbing
Hinson, John M.; Staddon, J. E. R.
1983-01-01
In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350
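The hill-climbing idea can be sketched with a toy deterministic rule (an editorial simplification of momentary maximizing, not the authors' procedure): at each step, respond to the alternative whose momentary reinforcement probability, which grows with the time since the last response there, is currently higher. On concurrent schedules with a 2:1 rate ratio, this purely local rule already produces 2:1 aggregate choice, i.e., matching:

```python
def momentary_maximizing(rate1, rate2, steps):
    """Respond each step to the side with the larger momentary probability
    1 - exp(-rate * t), where t is the time since the last response there.
    Comparing rate * t directly is equivalent, as 1 - exp(-x) is increasing."""
    t1 = t2 = 1
    counts = {1: 0, 2: 0}
    for _ in range(steps):
        if rate1 * t1 >= rate2 * t2:   # hill-climb on momentary probability
            counts[1] += 1
            t1 = 0                     # responding resets the local clock
        else:
            counts[2] += 1
            t2 = 0
        t1 += 1
        t2 += 1
    return counts

counts = momentary_maximizing(2.0, 1.0, 3000)
print(counts)  # choice ratio equals the 2:1 rate ratio: matching
```

Here the rule settles into the repeating pattern 1, 1, 2, so molar choice matches the programmed rate ratio exactly, illustrating the abstract's point that matching can emerge from a momentary-maximizing (hill-climbing) rule rather than being a separate molar law.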
Are all maximally entangled states pure?
NASA Astrophysics Data System (ADS)
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
MAXIM Pathfinder x-ray interferometry mission
NASA Astrophysics Data System (ADS)
Gendreau, Keith C.; Cash, Webster C.; Shipley, Ann F.; White, Nicholas
2003-03-01
The MAXIM Pathfinder (MP) mission is under study as a scientific and technical stepping stone for the full MAXIM X-ray interferometry mission. While full MAXIM will resolve the event horizons of black holes with 0.1 microarcsecond imaging, MP will address scientific and technical issues as a 100 microarcsecond imager with some capabilities to resolve microarcsecond structure. We will present the primary science goals of MP. These include resolving stellar coronae and distinguishing between jets and accretion disks in AGN. This paper will also present the baseline design of MP. We will overview the challenging technical requirements and solutions for formation flying, target acquisition, and metrology.
New standard exceeds expectations
Bennett, M.J.
1993-08-01
The new ASTM environmental due diligence standard is delivering far more than expected when it was conceived in 1990. Its use goes well beyond the relatively narrow legal liability protection that was the primary goal in its development. The real estate industry, spearheaded by the lending community, was preoccupied with environmental risk and liability. Lenders throughout the concept's evolution have been at the forefront in defining environmental due diligence. The lender liability rule is intended to protect property owners from CERCLA liability for property they own or companies they manage (for example, as a result of foreclosure). The new site assessment standard increasingly is considered a benchmark for prudent environmental due diligence in the interest of risk management, not legal liability. The focus on risk management, including collateral devaluation and corporate credit risk, are becoming dominant areas of policy focus in the lending industry. Lenders now are revising their policies to incorporate transactions beyond issues of real estate, in which a company's economic viability and ability to service debt could be impacted by an environmental problem unrelated to property transfers.
Samuel, Gabrielle; Williams, Clare
2015-01-01
Social scientists have drawn attention to the role of hype and optimistic visions of the future in providing momentum to biomedical innovation projects by encouraging innovation alliances. In this article, we show how less optimistic, uncertain, and modest visions of the future can also provide innovation projects with momentum. Scholars have highlighted the need for clinicians to carefully manage the expectations of their prospective patients. Using the example of a pioneering clinical team providing deep brain stimulation to children and young people with movement disorders, we show how clinicians confront this requirement by drawing on their professional knowledge and clinical expertise to construct visions of the future with their prospective patients; visions which are personalized, modest, and tainted with uncertainty. We refer to this vision-constructing work as recalibration, and we argue that recalibration enables clinicians to manage the tension between the highly optimistic and hyped visions of the future that surround novel biomedical interventions, and the exigencies of delivering those interventions in a clinical setting. Drawing on work from science and technology studies, we suggest that recalibration enrolls patients in an innovation alliance by creating a shared understanding of how the “effectiveness” of an innovation shall be judged. PMID:26527846
Expectations and speech intelligibility.
Babel, Molly; Russell, Jamie
2015-05-01
Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs. PMID:25994710
Maximal hypersurfaces in asymptotically stationary spacetimes
NASA Astrophysics Data System (ADS)
Chrusciel, Piotr T.; Wald, Robert M.
1992-12-01
The purpose of this work is to extend existing results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurfaces in asymptotically stationary spacetimes is proven. Existence of maximal surfaces and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one-parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'black hole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of spacetimes possesses a black hole and a white hole, with the black and white hole horizons intersecting in a compact 2-surface S.
Gaussian maximally multipartite-entangled states
Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano
2009-12-15
We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n<=7.
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Petersen, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
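As a toy illustration of the idea (not the Super Learner implementation discussed in the abstract), the sketch below blends two hypothetical base-learner prediction vectors with a convex weight chosen by grid search to maximize empirical AUC on a validation set. The function names and the grid-search strategy are illustrative assumptions; the paper itself formulates AUC maximization as a nonlinear optimization problem.

```python
import itertools


def auc(labels, scores):
    """Empirical AUC: probability that a random positive outranks a
    random negative (ties count 1/2)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n)
               for p, n in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))


def auc_metalearner(labels, base_preds, grid=101):
    """Pick a convex weight over two base learners that maximizes
    validation AUC, via a coarse grid search over w in [0, 1]."""
    best_w, best_auc = 0.0, -1.0
    for i in range(grid):
        w = i / (grid - 1)
        blended = [w * a + (1 - w) * b for a, b in zip(*base_preds)]
        score = auc(labels, blended)
        if score > best_auc:
            best_w, best_auc = w, score
    return best_w, best_auc
```

Because the grid includes the endpoints w = 0 and w = 1, the blended AUC can never fall below the better of the two base learners on the validation data.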
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits. PMID:25899152
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imaging mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 microarcsecond imaging. To achieve mission requirements, MAXIM will have to improve on pointing by orders of magnitude. This pointing requirement impacts the control and design of the formation. Currently the architecture is comprised of 25 spacecraft, which will form the sparse apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This configuration will deploy 24 spacecraft as optics modules and one as the detector. The formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. In this paper, we provide analysis and trades of several control efforts that are dependent upon the pointing requirements and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions that minimize the control efforts and we address both continuous and discrete control via LQR and feedback linearization. Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Explanatory Variance in Maximal Oxygen Uptake
Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.
2006-01-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18 - 24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants’ head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF). Key Points Body fat is an important predictor of VO2max. Individuals with low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher workloads; therefore, the net oxygen cost of the exercise cannot be controlled in inexperienced individuals water running at fatiguing workloads. Experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running. A submaximal water running protocol is needed in the research literature for individuals trained in the mechanics of water running, given the popularity of water running rehabilitative exercise programs and training programs. PMID:24260003
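The reported regression can be applied directly. The sketch below simply encodes the published equation VO2max = 56.14 - 0.92 (%BF); the function name is an assumption, and the standard error of estimate (SEE = 3.27 ml·kg-1·min-1) is reported alongside for context.

```python
def predict_vo2max(percent_body_fat):
    """Estimate VO2max (ml/kg/min) from percent body fat using the
    study's regression equation: VO2max = 56.14 - 0.92 (%BF).
    Reported SEE is about 3.27 ml/kg/min."""
    return 56.14 - 0.92 * percent_body_fat
```

For example, an individual with 20% body fat would have a predicted VO2max of about 37.7 ml·kg-1·min-1, with the usual caveat that the equation was fit on 18- to 24-year-olds trained in water running mechanics.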
Expecting the Best for Students: Teacher Expectations and Academic Outcomes
ERIC Educational Resources Information Center
Rubie-Davies, Christine; Hattie, John; Hamilton, Richard
2006-01-01
Background: Research into teacher expectations has shown that these have an effect on student achievement. Some researchers have explored the impact of various student characteristics on teachers' expectations. One attribute of interest is ethnicity. Aims: This study aimed to explore differences in teachers' expectations and judgments of student…
Great Expectations: Temporal Expectation Modulates Perceptual Processing Speed
ERIC Educational Resources Information Center
Vangkilde, Signe; Coull, Jennifer T.; Bundesen, Claus
2012-01-01
In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on "perceptual" speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus…
Resources and energetics determined dinosaur maximal size
McNab, Brian K.
2009-01-01
Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of two species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has a smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which, if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass of less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600
Caffeine, maximal power output and fatigue.
Williams, J H; Signorile, J F; Barnes, W S; Henrich, T W
1988-01-01
The purpose of this investigation was to determine the effects of caffeine ingestion on maximal power output and fatigue during short term, high intensity exercise. Nine adult males performed 15 s maximal exercise bouts 60 min after ingestion of caffeine (7 mg.kg-1) or placebo. Exercise bouts were carried out on a modified cycle ergometer which allowed power output to be computed for each one-half pedal stroke via microcomputer. Peak power output under caffeine conditions was not significantly different from that obtained following placebo ingestion. Similarly, time to peak power, total work, power fatigue index and power fatigue rate did not differ significantly between caffeine and placebo conditions. These results suggest that caffeine ingestion does not increase one's maximal ability to generate power. Further, caffeine does not alter the rate or magnitude of fatigue during high intensity, dynamic exercise. PMID:3228680
Energy Band Calculations for Maximally Even Superlattices
NASA Astrophysics Data System (ADS)
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
Maximal Holevo Quantity Based on Weak Measurements
Wang, Yao-Kun; Fei, Shao-Ming; Wang, Zhi-Xi; Cao, Jun-Peng; Fan, Heng
2015-01-01
The Holevo bound is a keystone in many applications of quantum information theory. We propose the "maximal Holevo quantity for weak measurements" as a generalization of the maximal Holevo quantity, which is defined via optimal projective measurements. Weak measurements are called for when only weak measurements can be performed, for example because the system is macroscopic, or when one intentionally limits the disturbance on the measured system, for example in quantum key distribution protocols. We systematically evaluate the maximal Holevo quantity for weak measurements for Bell-diagonal states and find a series of results. Furthermore, we find that weak measurements can be realized by noise and projective measurements. PMID:26090962
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
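A minimal sketch of the greedy infomax rule described in the abstract, under a simplifying assumption that is ours and not the authors': each fixation yields a perfectly reliable yes/no observation about whether the target is at the fixated location. Under that assumption, the expected information gain of fixating location i is the Bernoulli entropy of the current target probability p_i, so the rule fixates wherever the outcome is most uncertain. The real model instead reconstructs high-resolution visual information with foveal fall-off; this toy only conveys the "fixate where uncertainty is largest" principle.

```python
import math


def bernoulli_entropy(p):
    """Entropy (bits) of a yes/no outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def next_fixation(posterior):
    """Greedy infomax rule: fixate the location whose binary
    'target here?' outcome is most uncertain, i.e. the location
    whose probability is closest to 1/2."""
    gains = [bernoulli_entropy(p) for p in posterior]
    return max(range(len(posterior)), key=gains.__getitem__)
```

With target probabilities [0.1, 0.5, 0.4] the rule picks location 1, since an outcome with probability 0.5 carries a full bit of uncertainty.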
Measuring Alcohol Expectancies in Youth
ERIC Educational Resources Information Center
Randolph, Karen A.; Gerend, Mary A.; Miller, Brenda A.
2006-01-01
Beliefs about the consequences of using alcohol, alcohol expectancies, are powerful predictors of underage drinking. The Alcohol Expectancies Questionnaire-Adolescent form (AEQ-A) has been widely used to measure expectancies in youth. Despite its broad use, the factor structure of the AEQ-A has not been firmly established. It is also not known…
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward. PMID:26079747
On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites
ERIC Educational Resources Information Center
Penev, Spiridon; Raykov, Tenko
2006-01-01
A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
Patient (customer) expectations in hospitals.
Bostan, Sedat; Acuner, Taner; Yilmaz, Gökhan
2007-06-01
The expectations of patients are one of the determining factors of healthcare service. The purpose of this study is to measure patients' expectations, based on patients' rights. This study was done with a Likert survey in the Trabzon population. The analyses showed that the level of patient expectations was high on the factor of receiving information and at an acceptable level on the other factors. Statistically meaningful relationships were found between age, sex, education, health insurance, and family income and the expectations of the patients (p<0.05). According to this study, the current legal regulations have higher standards than the expectations of the patients. The patients' high level of satisfaction is interpreted as a consequence of their low level of expectations. It is suggested that educational and public awareness studies on patients' rights must be done in order to raise the expectations of the patients. PMID:17028043
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018
Maximal aerobic exercise following prolonged sleep deprivation.
Goodman, J; Radomski, M; Hart, L; Plyley, M; Shephard, R J
1989-12-01
The effect of 60 h without sleep upon maximal oxygen intake was examined in 12 young women, using a cycle ergometer protocol. The arousal of the subjects was maintained by requiring the performance of a sequence of cognitive tasks throughout the experimental period. Well-defined oxygen intake plateaus were obtained both before and after sleep deprivation, and no change of maximal oxygen intake was observed immediately following sleep deprivation. The endurance time for exhausting exercise also remained unchanged, as did such markers of aerobic performance as peak exercise ventilation, peak heart rate, peak respiratory gas exchange ratio, and peak blood lactate. However, as in an earlier study of sleep deprivation with male subjects (in which a decrease of treadmill maximal oxygen intake was observed), the formula of Dill and Costill (4) indicated the development of a substantial (11.6%) increase of estimated plasma volume percentage with corresponding decreases in hematocrit and red cell count. Possible factors sustaining maximal oxygen intake under the conditions of the present experiment include (1) maintained arousal of the subjects with no decrease in peak exercise ventilation or the related respiratory work and (2) use of a cycle ergometer rather than a treadmill test with possible concurrent differences in the impact of hematocrit levels and plasma volume expansion upon peak cardiac output and thus oxygen delivery to the working muscles. PMID:2628360
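The plasma-volume estimate cited above can be sketched with the standard Dill and Costill (1974) relations, which infer the blood-volume change from hemoglobin and recover plasma volume from hematocrit. Treat this as an illustrative rendering of the formula, not the authors' exact computation (the study as described also drew on hematocrit and red cell counts); variable names are assumptions.

```python
def plasma_volume_change(hb_before, hct_before, hb_after, hct_after):
    """Dill & Costill-style percent change in plasma volume.
    hb_* : hemoglobin concentration (e.g. g/dL)
    hct_*: hematocrit as a fraction (e.g. 0.45)
    Sets pre-test blood volume to 100 (arbitrary units); hemoglobin
    dilution scales the post-test blood volume, and plasma volume is
    the non-cellular fraction of blood volume."""
    bv_before = 100.0
    bv_after = bv_before * (hb_before / hb_after)
    pv_before = bv_before * (1.0 - hct_before)
    pv_after = bv_after * (1.0 - hct_after)
    return 100.0 * (pv_after - pv_before) / pv_before
```

A drop in both hemoglobin and hematocrit, as reported after sleep deprivation, yields a positive percent change, i.e. an expansion of estimated plasma volume.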
Does evolution lead to maximizing behavior?
Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen
2015-07-01
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences). PMID:26082379
How to Generate Good Profit Maximization Problems
ERIC Educational Resources Information Center
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
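The urn lottery described above is, in essence, the classic Ehrenfest model, and it is easy to simulate: a minimal sketch (two urns, one marble hopping per step) showing that however lopsided the start, the time-averaged occupancy settles near the half-and-half split that maximizes entropy. Parameter names and the burn-in choice are illustrative assumptions, not details from the article.

```python
import random


def ehrenfest(n_marbles=100, steps=20000, seed=1):
    """Ehrenfest urn dynamics: at each step a uniformly random marble
    hops to the other urn. Starts with every marble in urn A (a
    low-entropy state) and returns the time-averaged occupancy of
    urn A over the second half of the run."""
    rng = random.Random(seed)
    in_a = n_marbles
    total, samples = 0, 0
    for t in range(steps):
        if rng.random() < in_a / n_marbles:
            in_a -= 1   # the chosen marble was in urn A
        else:
            in_a += 1   # the chosen marble was in urn B
        if t >= steps // 2:
            total += in_a
            samples += 1
    return total / samples
```

For 100 marbles the long-run average sits near 50, with equilibrium fluctuations on the order of sqrt(n)/2, which is the diffusion-like relaxation to entropy maximization the thought experiment is built around.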
Faculty Salaries and the Maximization of Prestige
ERIC Educational Resources Information Center
Melguizo, Tatiana; Strober, Myra H.
2007-01-01
Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…
Maximizing the Spectacle of Water Fountains
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
A Model of College Tuition Maximization
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Maximizing the Phytonutrient Content of Potatoes
Technology Transfer Automated Retrieval System (TEKTRAN)
We are exploring to what extent the rich genetic diversity of potatoes can be used to maximize the nutritional potential of potatoes. Metabolic profiling is being used to screen potatoes for genotypes with elevated amounts of vitamins and phytonutrients. Substantial differences in phytonutrients am...
Educational Expectations and Attainment. NBER Working Paper No. 15683
ERIC Educational Resources Information Center
Jacob, Brian A.; Wilder, Tamara
2010-01-01
This paper examines the role of educational expectations in the educational attainment process. We utilize data from a variety of datasets to document and analyze the trends in educational expectations between the mid-1970s and the early 2000s. We focus on differences across racial/ethnic and socioeconomic groups and examine how young people…
Teacher Expectancy Related to Student Performance in Vocational Education.
ERIC Educational Resources Information Center
Pandya, Himanshu S.
A study was designed (1) to discover the effect of teacher expectation on student performance in the cognitive and in the psychomotor skills, and (2) to analyze students' attitudes toward teachers because of teacher expectations. The study utilized two different instructional units. The quality milk production unit was used to teach cognitive…
Expectancies vs. Background in the Prediction of Adult Drinking Patterns.
ERIC Educational Resources Information Center
Brown, Sandra A.
Alcoholism research has independently focused on background characteristics and alcohol-related expectations, e.g., social and physical pleasure, reduced tension, and increased assertiveness, as important variables in identifying high risk individuals. To assess the utility of alcohol reinforcement expectations as predictors of drinking patterns,…
The evolution of utility functions and psychological altruism.
Clavien, Christine; Chapuisat, Michel
2016-04-01
Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the evolutionary stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as own children, romantic partners and long term reciprocators. PMID:26598465
Intervening in Expectation Communication: The "Alterability" of Teacher Expectations.
ERIC Educational Resources Information Center
Cooper, Harris M.
Theoretical and practical implications of the proposition that teachers' differential behavior toward high and low expectation students serves a control function were tested. As predicted, initial performance expectations were found related to later perceptions of control over performance, even when the initial relationship between expectations…
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138
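The paper's congestible-link flow model is not reproduced in this abstract, but the percolation threshold on which its result pivots is easy to illustrate. Below is a minimal sketch (my own, not the paper's POA computation): bond percolation on an L×L square lattice via union-find, where the probability of a left-to-right spanning cluster jumps near the known threshold p_c = 1/2.

```python
import random

class DSU:
    """Disjoint-set union with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def spans(L, p, rng):
    """True if open bonds connect the left edge to the right edge of an LxL grid."""
    dsu = DSU(L * L + 2)               # two virtual nodes: left edge, right edge
    LEFT, RIGHT = L * L, L * L + 1
    idx = lambda r, c: r * L + c
    for r in range(L):
        dsu.union(idx(r, 0), LEFT)
        dsu.union(idx(r, L - 1), RIGHT)
        for c in range(L):
            if c + 1 < L and rng.random() < p:   # horizontal bond is open
                dsu.union(idx(r, c), idx(r, c + 1))
            if r + 1 < L and rng.random() < p:   # vertical bond is open
                dsu.union(idx(r, c), idx(r + 1, c))
    return dsu.find(LEFT) == dsu.find(RIGHT)

def spanning_prob(L, p, trials=200, seed=1):
    """Monte Carlo estimate of the spanning probability at bond density p."""
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(trials)) / trials
```

With L = 30, `spanning_prob(30, 0.3)` is essentially 0 while `spanning_prob(30, 0.7)` is essentially 1, bracketing the square-lattice bond threshold p_c = 1/2 near which the paper finds the POA maximized.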
The price of anarchy is maximized at the percolation threshold
NASA Astrophysics Data System (ADS)
Skinner, Brian
2015-03-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called ``price of anarchy'' (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly-placed ``congestible'' and ``incongestible'' links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Loops and multiple edges in modularity maximization of networks
NASA Astrophysics Data System (ADS)
Cafieri, Sonia; Hansen, Pierre; Liberti, Leo
2010-04-01
The modularity maximization model proposed by Newman and Girvan for the identification of communities in networks works for general graphs possibly with loops and multiple edges. However, the applications usually correspond to simple graphs. These graphs are compared to a null model where the degree distribution is maintained but edges are placed at random. Therefore, in this null model there will be loops and possibly multiple edges. Sharp bounds on the expected number of loops, and their impact on the modularity, are derived. Then, building upon the work of Massen and Doye, but using algebra rather than simulation, we propose modified null models associated with graphs without loops but with multiple edges, graphs with loops but without multiple edges and graphs without loops nor multiple edges. We validate our models by using the exact algorithm for clique partitioning of Grötschel and Wakabayashi.
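The Newman-Girvan modularity that these null models modify can be computed directly from an edge list. A hedged sketch follows (function names are mine, and it uses one common convention for multigraphs): note in the comments how a self-loop contributes 2 to its endpoint's degree, which is exactly the null-model subtlety the abstract discusses.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/2m)^2) for an
    undirected multigraph given as a list of (u, v) edges.
    Self-loops and repeated edges are allowed."""
    m = len(edges)                      # total number of edges
    deg = defaultdict(int)
    internal = defaultdict(int)         # edges with both endpoints in community c
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1                     # a self-loop (u == v) adds 2 to deg[u]
        if community[u] == community[v]:
            internal[community[u]] += 1
    comm_deg = defaultdict(int)         # total degree d_c of each community
    for node, d in deg.items():
        comm_deg[community[node]] += d
    return sum(internal[c] / m - (comm_deg[c] / (2 * m)) ** 2
               for c in comm_deg)
```

For example, two triangles joined by a single bridge edge, split at the bridge into two communities, give Q = 5/14 ≈ 0.357.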
2011-09-30
The software package provides several utilities written in LabVIEW. These utilities do not form independent programs, but rather can be used as a library or as controls in other LabVIEW programs. The utilities include several new controls (XControls), VIs for input and output routines, as well as other helper functions not provided in the standard LabVIEW environment.
Maximal CP violation in flavor neutrino masses
NASA Astrophysics Data System (ADS)
Kitabayashi, Teruyuki; Yasuè, Masaki
2016-03-01
Since the flavor neutrino masses Mμμ, Mττ, and Mμτ can be expressed in terms of Mee, Meμ, and Meτ, a mutual dependence among Mμμ, Mττ, and Mμτ is derived by imposing constraints on Mee, Meμ, and Meτ. For appropriately imposed constraints on Mee, Meμ, and Meτ giving rise to both maximal CP violation and maximal atmospheric neutrino mixing, we show various specific textures of neutrino mass matrices, including the texture with Mττ = Mμμ*, derived as the simplest solution to the constraint that Mττ − Mμμ is imaginary, which in turn is required by the constraint that Meμ cos θ23 − Meτ sin θ23 is real for cos 2θ23 = 0. It is found that Majorana CP violation depends on the phase of Mee.
Hamiltonian formalism and path entropy maximization
NASA Astrophysics Data System (ADS)
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Nondecoupling of maximal supergravity from the superstring.
Green, Michael B; Ooguri, Hirosi; Schwarz, John H
2007-07-27
We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d)-torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d ≥ 4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d ≥ 4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland. PMID:17678349
Maximal temperature in a simple thermodynamical system
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Stojkovic, Dejan
2016-06-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center-of-mass energy come closer to each other than the Schwarzschild diameter, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Experimental implementation of maximally synchronizable networks
NASA Astrophysics Data System (ADS)
Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.
2016-04-01
Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. By tuning the weights of any given network so as to reach the lowest possible eigenratio λN /λ2, the synchronized state is guaranteed to be maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of the synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on the synchronizability is proposed and explored both theoretically and experimentally.
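The eigenratio λN/λ2 that the MSN construction minimizes is straightforward to evaluate numerically. A small undirected illustration follows (the paper's MSNs are weighted, directed, acyclic networks, so this sketch shows only the metric, not the construction):

```python
import numpy as np

def eigenratio(A):
    """Eigenratio lambda_N / lambda_2 of the graph Laplacian L = D - A.
    Smaller values mean the synchronized state survives over a wider
    range of coupling strengths."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.sort(np.linalg.eigvalsh(L))   # eigenvalues in ascending order
    return lam[-1] / lam[1]

def ring(n):
    """Adjacency matrix of an undirected n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# The complete graph attains the minimum possible eigenratio of 1.
complete = np.ones((6, 6)) - np.eye(6)
```

Here `eigenratio(complete)` is exactly 1, while `eigenratio(ring(6))` is 4, so the ring stays synchronized over a much narrower range of coupling strengths.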
Increasing Expectations for Student Effort.
ERIC Educational Resources Information Center
Schilling, Karen Maitland; Schilling, Karl L.
1999-01-01
States that few higher education institutions have publicly articulated clear expectations of the knowledge and skills students are to attain. Describes gap between student and faculty expectations for academic effort. Reports that what is required in students' first semester appears to play a strong role in shaping the time investments made in…
Sibling Status Effects: Adult Expectations.
ERIC Educational Resources Information Center
Baskett, Linda Musun
1985-01-01
This study attempted to determine what expectations or beliefs adults might hold about a child based on his or her sibling status alone. Ratings on 50 adjective pairs for each of three sibling status types, only, oldest, and youngest child, were assessed in relation to adult expectations, birth order, and parental status of rater. (Author/DST)
Expectations of Garland [Junior College].
ERIC Educational Resources Information Center
Garland Junior Coll., Boston, MA.
A survey was conducted at Garland Junior College to determine the educational expectations of 69 new students, 122 parents, and 22 college faculty and administrators. Each group in this private women's college was asked to rank, in terms of expectations they held, the following items: learn job skills, mature in relations with others, become more…
Student Expectations of Grade Inflation.
ERIC Educational Resources Information Center
Landrum, R. Eric
1999-01-01
College students completed evaluation-of-teaching surveys in five different courses to develop an evaluation instrument that would provide results concerning faculty performance. Two questions examined students' expectations regarding grades. Results indicated a significant degree of expected grade inflation. Large proportions of students doing…
Institutional Differences: Expectations and Perceptions.
ERIC Educational Resources Information Center
Silver, Harold
1982-01-01
The history of higher education has paid scant attention to the attitudes and expectations of its customers, students, and employers of graduates. Recent research on student and employer attitudes toward higher education sectors has not taken into account these expectations in the context of recent higher education history. (Author/MSE)
Basic principles of maximizing dental office productivity.
Mamoun, John
2012-01-01
To maximize office productivity, dentists should focus on performing tasks that only they can perform and not spend office hours performing tasks that can be delegated to non-dentist personnel. An important element of maximizing productivity is to arrange the schedule so that multiple patients are seated simultaneously in different operatories. Doing so allows the dentist to work on one patient in one operatory without needing to wait for local anesthetic to take effect on another patient in another operatory, or for assistants to perform tasks (such as cleaning up, taking radiographs, performing prophylaxis, or transporting and preparing equipment and supplies) in other operatories. Another way to improve productivity is to structure procedures so that fewer steps are needed to set up and implement them. In addition, during procedures, four-handed dental passing methods can be used to provide the dentist with supplies or equipment when needed. This article reviews basic principles of maximizing dental office productivity, based on the author's observations of business logistics used by various dental offices. PMID:22414506
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
Revenue maximization in survivable WDM networks
NASA Astrophysics Data System (ADS)
Sridharan, Murari; Somani, Arun K.
2000-09-01
Service availability is an indispensable requirement for many current and future applications over the Internet and hence has to be addressed as part of the optical QoS service model. Network service providers can offer varying classes of service based on the choice of protection employed, which can vary from full protection to no protection. Based on the service classes, traffic in the network falls into one of three classes: full protection, no protection, and best-effort. The network typically relies on the best-effort traffic for maximizing revenue. We consider two variations on the best-effort class: (1) all connections are accepted and the network tries to protect as many as possible, and (2) a mix of protected and unprotected connections where the goal is to maximize revenue. In this paper, we present a mathematical formulation, capturing service differentiation based on lightpath protection, for revenue maximization in wavelength-routed backbone networks. Our approach also incorporates the service-disruption aspect into the problem formulation, as there may be a penalty for disrupting currently working connections.
Maximal acceleration is non-rotating
NASA Astrophysics Data System (ADS)
Page, Don N.
1998-06-01
In a stationary axisymmetric spacetime, the angular velocity of a stationary observer whose acceleration vector is Fermi-Walker transported is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer. The converse is also true if the spacetime is symmetric under reversing both t and φ together. Thus a congruence of non-rotating acceleration worldlines (NAW) is equivalent to a stationary congruence accelerating locally extremely (SCALE). These congruences are defined completely locally, unlike the case of zero angular momentum observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a stationary congruence accelerating maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulae for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry axis, and in a slowly rotating gravitational field, including the far-field limit, where the SCAM is shown to be counter-rotating relative to infinity. These formulae are evaluated in particular detail for the Kerr-Newman metric. Various other congruences are also defined, such as a stationary congruence rotating at minimum (SCRAM), and stationary worldlines accelerating radially maximally (SWARM), both of which coincide with a SCAM on an equatorial plane of reflection symmetry. Applications are also made to the gravitational fields of maximally rotating stars, the Sun and the Solar System.
Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement
Lee, Seung-Woo; Jaksch, Dieter
2009-07-15
We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.
Uplink Array Calibration via Far-Field Power Maximization
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Mukai, R.; Lee, D.
2006-01-01
Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
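At its core, the calibration problem above is choosing per-antenna carrier phases that maximize the coherently combined far-field power. A toy coordinate-ascent sketch under that reading follows (the sweep granularity, pass count, and function names are my choices for illustration, not DSN parameters):

```python
import cmath
import math

def combined_power(phases, gains=None):
    """Far-field power of N coherently summed carriers: |sum g_k * e^{i phi_k}|^2."""
    gains = gains or [1.0] * len(phases)
    s = sum(g * cmath.exp(1j * p) for g, p in zip(gains, phases))
    return abs(s) ** 2

def calibrate(phases, steps=64, passes=3):
    """One-antenna-at-a-time phase sweep: for each antenna, keep the trial
    phase that maximizes total received power (the quantity a far-field
    reflection measurement provides). Power never decreases, so the array
    converges toward the fully phased-up state."""
    phases = list(phases)
    grid = [2 * math.pi * t / steps for t in range(steps)]
    for _ in range(passes):
        for k in range(len(phases)):
            phases[k] = max(grid, key=lambda p: combined_power(
                phases[:k] + [p] + phases[k + 1:]))
    return phases
```

Starting from arbitrary phase offsets, a four-element array converges to nearly the ideal power of N² = 16 (up to the 2π/64 grid quantization).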
Utility Static Generation Reliability
1993-03-05
PICES (Probabilistic Investigation of Capacity and Energy Shortages) was developed for estimating an electric utility's expected frequency and duration of capacity deficiencies on a daily on- and off-peak basis. In addition to the system loss-of-load probability (LOLP) and loss-of-load expectation (LOLE) indices, PICES calculates the expected frequency and duration of system capacity deficiencies and the probability, expectation, and expected frequency and duration of a range of system reserve margin states. Results are aggregated and printed on a weekly, monthly, or annual basis. The program employs hourly load data and either the two-state (on/off) or a more sophisticated three-state (on/partially on/fully off) generating unit representation. Unit maintenance schedules are determined on a weekly, levelized reserve margin basis. In addition to the 8760-hour annual load record, the user provides the following information for each unit: plant capacity, annual maintenance requirement, two- or three-state unit failure and repair rates, and for three-state models, the partial-state capacity deficiency. PICES can also supply default failure and repair rate values, based on the Edison Electric Institute's 1979 Report on Equipment Availability for the Ten-Year Period 1968 Through 1977, for many common plant types. Multi-year analysis can be performed by specifying as input data the annual peak load growth rates and plant addition and retirement schedules for each year in the study.
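PICES's internals are not given here, but LOLP-style indices for two-state units are conventionally computed by convolving unit outage distributions into a capacity outage probability table. A minimal sketch of that standard construction (not PICES's actual algorithm):

```python
def outage_table(units):
    """Capacity outage probability table for two-state generating units.
    Each unit is (capacity_MW, forced_outage_rate). Returns a dict
    mapping total MW on outage -> probability, built by convolution."""
    table = {0: 1.0}
    for cap, forr in units:
        new = {}
        for out, p in table.items():
            new[out] = new.get(out, 0.0) + p * (1 - forr)        # unit available
            new[out + cap] = new.get(out + cap, 0.0) + p * forr  # unit on outage
        table = new
    return table

def lolp(units, load):
    """Probability that available capacity falls below a given load level."""
    total = sum(cap for cap, _ in units)
    return sum(p for out, p in outage_table(units).items() if total - out < load)
```

For two 100 MW units each with a 10% forced outage rate, the loss-of-load probability at a 150 MW load is 0.19 (either unit down, or both).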
Physical activity extends life expectancy
Leisure-time physical activity is associated with longer life expectancy, even at relatively low levels of activity and regardless of body weight, according to a study by a team of researchers led by the NCI.
Dialysis centers - what to expect
... what to expect; Renal replacement therapy - dialysis centers; End-stage renal disease - dialysis centers; Kidney failure - dialysis ... swells and the hand on that side feels cold Your hand gets cold, numb, or weak Also ...
Maternal Competence, Expectation, and Involvement
ERIC Educational Resources Information Center
Heath, Douglas H.
1977-01-01
Presents a study of maternal competence, expectations and involvement in child rearing decisions in relation to paternal personality and marital characteristics. Subjects were 45 thirty-year-old mothers. (BD)
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame. PMID:12416921
Electromagnetically induced grating with maximal atomic coherence
Carvalho, Silvania A.; Araujo, Luis E. E. de
2011-10-15
We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-Λ atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.
Coloring random graphs and maximizing local diversity.
Bounkong, S; van Mourik, J; Saad, D
2006-11-01
We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. PMID:17280022
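Belief propagation and Walksat are beyond a short sketch, but the objective, maximizing the number of distinct colors in every vertex's neighborhood, is easy to state and to attack with a naive greedy local search (a stand-in of my own for the paper's algorithms, not one of them):

```python
import random

def diversity(adj, colors):
    """Objective: total count of distinct colors in each vertex's neighborhood."""
    return sum(len({colors[u] for u in adj[v]}) for v in adj)

def local_search(adj, q, steps=2000, seed=0):
    """Greedy local search over q colors: repeatedly pick a random vertex
    and recolor it with whichever color maximizes the global objective."""
    rng = random.Random(seed)
    nodes = list(adj)
    colors = {v: rng.randrange(q) for v in nodes}
    for _ in range(steps):
        v = rng.choice(nodes)
        best, best_score = colors[v], -1
        for c in range(q):
            colors[v] = c
            score = diversity(adj, colors)
            if score > best_score:
                best, best_score = c, score
        colors[v] = best
    return colors
```

On a 4-cycle with two colors, a perfect solution (every vertex seeing both colors among its neighbors, objective 8) requires the two diagonal pairs to be bi-colored, which the search finds quickly; note that a proper 2-coloring of the cycle scores only 4 here, so this objective is quite different from classical coloring.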
Using molecular biology to maximize concurrent training.
Baar, Keith
2014-11-01
Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This review specifically defines the concurrent training effect, discusses the potential molecular mechanisms underlying this effect, and proposes strategies to maximize strength and endurance in the high-level athlete. PMID:25355186
Great Expectations: Expectation Based Reasoning in Medical Diagnosis
Fisher, Paul R.; Miller, Perry L.; Swett, Henry A.
1988-01-01
Several different approaches to knowledge representation for medical expert systems have been explored. We suggest that a modified version of the script formalism, which we term “expectation-based reasoning”, may offer an additional knowledge representation for medical information, addressing certain shortcomings of previous approaches. This representation can drive expert system analysis for diagnosis and workup advice. The script formalism structures the knowledge base around a set of temporally sequenced event frames, each containing a list of default expectations. This model, we believe, allows straightforward knowledge generation from a domain expert, since it may closely parallel a central aspect of human clinical decision-making: that of projecting assumptions for a “hypothesize-and-test” inference mechanism. A prototype expectation-based expert system, OSCAR, is under development to explore this approach.
Optimizing Population Variability to Maximize Benefit
Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye
2015-01-01
Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the "right mix" of individuals in a population. PMID:26650247
Factors affecting maximal momentary grip strength.
Martin, S; Neale, G; Elia, M
1985-03-01
Maximal voluntary grip strength has been measured in normal adults aged 18-70 years (17 f, 18 m) and compared with other indices of body muscle mass. Grip strength (dominant side) was directly proportional to creatinine excretion (r = 0.81); to forearm muscle area (r = 0.73); to upper arm muscle area (r = 0.71) and to lean body mass (r = 0.65). Grip strength relative to forearm muscle area decreased with age. The study of a subgroup of normal subjects revealed a small but significant postural and circadian effect on grip strength. The effect on maximal voluntary grip strength of sedatives in elderly subjects undergoing routine endoscopy (n = 6), and of acute infections in otherwise healthy individuals (n = 6), severe illness in patients requiring intensive care (n = 6), chronic renal failure (n = 7) and anorexia nervosa (n = 6) has been assessed. Intravenous diazepam and buscopan produced a 50 per cent reduction in grip strength which returned to normal within the next 2-3 h. Acute infections reduced grip strength by a mean of 35 per cent and severe illness in patients in intensive care by 60 per cent. In patients with chronic renal failure grip strength was 80-85 per cent of that predicted from forearm 'muscle area' (P < 0.05). In anorectic patients the values were appropriate for their forearm muscle area. Nevertheless nutritional rehabilitation of one anorectic patient did not lead to a consistent improvement in grip strength. PMID:3926728
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. By using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10^5-10^6 cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salt interactions to control viscosity can provide a simple mechanism to design humidity-responsive smart adhesives. PMID:26513350
Maximizing strain in miniaturized dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert
2015-04-01
We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with a displacement half of what can be obtained with a dead load. We expand our fabrication process to also show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.
Maximal lactate steady state in Judo
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-01-01
Summary Background: the purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. Methods: to test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and performed a 30-minute MLSS test, with both tests mimicking the uchi-komi drills. Results: the intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p=0.40). In addition, a high and significant correlation was observed between MLSS and RCT (r=0.90, p=0.002), as well as high agreement. Conclusions: RCT measured during the JSIT is a valid procedure to measure aerobic demand, respecting the ecological validity of judo. PMID:25332923
The ethics of life expectancy.
Small, Robin
2002-08-01
Some ethical dilemmas in health care, such as the use of age as a criterion for patient selection, appeal to the notion of life expectancy. However, some features of this concept have not been discussed. Here I look in turn at two aspects: one positive (our expectation of further life) and the other negative (the loss of potential life brought about by death). The most common method of determining this loss, by counting only the period of time between death and some particular age, implies that those who die at ages not far from that one are regarded as losing very little potential life, while those who die at greater ages are regarded as losing none at all. This approach has methodological advantages but ethical disadvantages, in that it fails to correspond to our strong belief that anyone who dies is losing some period of life that he or she would otherwise have had. The normative role of life expectancy expressed in the 'fair innings' attitude arises from a particular historical situation: not the increase of life expectancy in modern societies, but a related narrowing in the distribution of projected life spans. Since life expectancy is really a representation of existing patterns of mortality, which in turn are determined by many influences, including the present allocation of health resources, it should not be taken as a prediction, and still less as a statement of entitlement. PMID:12956176
Excap: Maximization of Haplotypic Diversity of Linked Markers
Kahles, André; Sarqume, Fahad; Savolainen, Peter; Arvestad, Lars
2013-01-01
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth-first search combined with a branch-and-bound approach. Since the worst-case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it in two forensic casework studies using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy. PMID:24244403
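The subset search described in the abstract can be sketched as a depth-first branch-and-bound: prune any partial marker set whose optimistic bound (the diversity attainable even if every remaining marker were added) cannot beat the best complete set found so far. This is an illustrative reconstruction, not the Excap implementation; the score here is simply the count of distinct haplotypes.

```python
def distinct_haplotypes(sequences, subset):
    """Number of distinct haplotypes induced by the markers in `subset`."""
    return len({tuple(seq[i] for i in subset) for seq in sequences})

def best_marker_subset(sequences, k):
    """Depth-first branch-and-bound search for the k-marker subset that
    maximizes haplotypic diversity (count of distinct haplotypes)."""
    n = len(sequences[0])
    best = (0, ())

    def dfs(start, chosen):
        nonlocal best
        if len(chosen) == k:
            diversity = distinct_haplotypes(sequences, chosen)
            if diversity > best[0]:
                best = (diversity, tuple(chosen))
            return
        # Optimistic bound: diversity using all remaining markers as well.
        # If even that cannot beat the incumbent, prune this branch.
        if distinct_haplotypes(sequences, chosen + list(range(start, n))) <= best[0]:
            return
        for i in range(start, n):
            dfs(i + 1, chosen + [i])

    dfs(0, [])
    return best
```

With `seqs = ["AC", "AG", "TC", "TG"]`, `best_marker_subset(seqs, 2)` returns `(4, (0, 1))`: both positions are needed to separate all four haplotypes, while any single marker distinguishes only two.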
Broken Expectations: Violation of Expectancies, Not Novelty, Captures Auditory Attention
ERIC Educational Resources Information Center
Vachon, Francois; Hughes, Robert W.; Jones, Dylan M.
2012-01-01
The role of memory in behavioral distraction by auditory attentional capture was investigated: We examined whether capture is a product of the novelty of the capturing event (i.e., the absence of a recent memory for the event) or its violation of learned expectancies on the basis of a memory for an event structure. Attentional capture--indicated…
Expectant Fathers: Changes and Concerns
Rockwell, Beverly
1989-01-01
The author conducted a comprehensive literature review on expectant fatherhood to determine the needs of men participating in the childbearing cycle. A sparse but growing body of knowledge exists about this population. A number of authors reported distinct changes and concerns. Most of the study subjects were participants in prenatal classes, a factor which suggests that the findings may not reflect the needs of all expectant fathers. All partners were experiencing a normal pregnancy. This precluded the anxiety of a high-risk situation as a confounding variable. Most information given to expectant fathers was intended to assist them to support their partners. There was little evidence that men received much professional guidance to prepare them for fatherhood. PMID:21249006
TRENDS IN SENESCENT LIFE EXPECTANCY
Bongaarts, John
2009-01-01
The distinction between senescent and non-senescent mortality proves to be very valuable for describing and analyzing age patterns of death rates. Unfortunately, standard methods for estimating these mortality components are lacking. The first part of this study discusses alternative methods for estimating background and senescent mortality among adults and proposes a simple approach based on death rates by causes of death. The second part examines trends in senescent life expectancy (i.e. the life expectancy implied by senescent mortality) and compares them with trends in conventional longevity indicators between 1960 and 2000 in a group of 17 developed countries with low mortality. Senescent life expectancy for females rose at an average rate of 1.54 years per decade between 1960 and 2000 in these countries. The shape of the distribution of senescent deaths by age remained relatively invariant while the entire distribution shifted over time to higher ages as longevity rose. PMID:19851933
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
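A dispatch scheduler of the kind described selects, each time the telescope becomes free, the target maximizing a weighting function rather than following a precomputed sequence. The sketch below uses a hypothetical weighting (priority scaled by cadence urgency and penalized by airmass); MINERVA's actual weight terms are not specified in the abstract.

```python
import math

def dispatch_pick(targets, now):
    """Return the target with the highest weight at decision time `now`.
    The weight function (priority * cadence urgency / airmass) is a
    hypothetical stand-in for MINERVA's actual weighting."""
    def weight(t):
        if not t["observable"]:
            return -math.inf  # never select targets below the horizon
        # urgency > 1 means the target is overdue for its next observation
        urgency = (now - t["last_obs"]) / t["desired_cadence"]
        return t["priority"] * urgency / t["airmass"]
    return max(targets, key=weight)
```

Re-evaluating `dispatch_pick` over the live target list at every decision point is what makes the schedule dynamic and flexible: weather losses and new detections simply change the next argmax.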
Maximally polarized states for quantum light fields
Sanchez-Soto, Luis L.; Yustas, Eulogio C.; Bjoerk, Gunnar; Klimov, Andrei B.
2007-10-15
The degree of polarization of a quantum field can be defined as its distance to an appropriate set of states. When we take unpolarized states as this reference set, the states optimizing this degree for a fixed average number of photons N present a fairly symmetric, parabolic photon statistic, with a variance scaling as N^2. Although no standard optical process yields such a statistic, we show that, to an excellent approximation, a highly squeezed vacuum can be taken as maximally polarized. We also consider the distance of a field to the set of its SU(2) transforms, finding that certain linear superpositions of SU(2) coherent states make this degree unity.
Maximal energy extraction under discrete diffusive exchange
Hay, M. J.; Schiff, J.; Fisch, N. J.
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Mixtures of maximally entangled pure states
NASA Astrophysics Data System (ADS)
Flores, M. M.; Galapon, E. A.
2016-09-01
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
Characterizing maximally singular phase-space distributions
NASA Astrophysics Data System (ADS)
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
Primary expectations of secondary metabolites
Technology Transfer Automated Retrieval System (TEKTRAN)
Plant secondary metabolites (e.g., phenolics) are important for human health, in addition to the organoleptic properties they impart to fresh and processed foods. Consumer expectations such as appearance, taste, or texture influence their purchasing decisions. Thorough identification of phenolic com...
Career Expectations of Accounting Students
ERIC Educational Resources Information Center
Elam, Dennis; Mendez, Francis
2010-01-01
The demographic make-up of accounting students is dramatically changing. This study sets out to measure how well the profession is ready to accommodate what may be very different needs and expectations of this new generation of students. Non-traditional students are becoming more and more of a tradition in the current college classroom.…
Reasonable Expectation of Adult Behavior.
ERIC Educational Resources Information Center
Todaro, Julie
1999-01-01
Discusses staff behavioral problems that prove difficult for successful library management. Suggests that reasonable expectations for behavior need to be established in such areas as common courtesies, environmental issues such as temperature and noise levels, work relationships and values, diverse work styles and ways of communicating, and…
Great Expectations and New Beginnings
ERIC Educational Resources Information Center
Davis, Frances A.
2009-01-01
Great Expectation and New Beginnings is a prenatal family support program run by the Family, Infant, and Preschool Program (FIPP) in North Carolina. FIPP has developed an evidence-based integrated framework of early childhood intervention and family support that includes three primary components: providing intervention in everyday family…
Undergraduates' Perceptions of Employer Expectations
ERIC Educational Resources Information Center
DuPre, Carrie; Williams, Kate
2011-01-01
Research conducted by the National Association of Colleges and Employers (NACE) indicates that employers across industries seek similar skills in job applicants; yet employers often report finding these desired skills lacking in new hires. This study closes the gap in understanding between employer expectations and student perceptions regarding…
Life Expectancy of Kibbutz Members.
ERIC Educational Resources Information Center
Leviatan, Uri; And Others
1986-01-01
Data are presented demonstrating that the life expectancy of kibbutz members--both men and women--is higher than that of the overall Jewish population in Israel. These data add to and support other research findings illustrating the more positive mental health and well-being found among kibbutz members than among other comparative populations.…
Evaluation of Behavioral Expectation Scales.
ERIC Educational Resources Information Center
Zedeck, Sheldon; Baker, Henry T.
Behavioral Expectation Scales developed by Smith and Kendall were evaluated. Results indicated slight interrater reliability between Head Nurses and Supervisors, moderate dependence among five performance dimensions, and correlation between two scales and tenure. Results are discussed in terms of procedural problems, critical incident problems,…
Corporate diversification: expectations and outcomes.
Clement, J P
1988-01-01
A review of the research concerning the diversification experience of firms in other industries shows that expectations of higher profit rates and lower risk are not entirely realistic. However, there are many ways in which the probability of financially successful diversification may be increased. PMID:3384656
Supervising Prerelease Offenders: Clarifying Expectations.
ERIC Educational Resources Information Center
Benekos, Peter J.
1986-01-01
Presents and discusses a conceptual model of the concerns of prerelease offenders and community supervisors. The conceptualization suggests that "perceptual differences" of the concerns of prerelease status is one alternative for examining the supervisorial relationship. Attempts to identify and confront the different expectations of supervisors…
Metaphors As Storehouses of Expectation.
ERIC Educational Resources Information Center
Beavis, Allan K.; Thomas, A. Ross
1996-01-01
Explores how metaphors are used to identify and store some expectations that structure schools' interactions and communications. Outlines a systems-theoretical view of schools derived from Niklas Luhmann's social theories. Illustrates how the metaphors identified in an earlier study provide material contexts for identifying and storing structures…
Differentiated Staffing: Expectations and Pitfalls.
ERIC Educational Resources Information Center
Barbee, Don
Once a differentiated staffing pattern has been adopted--with the understanding that it is not a panacea--staff members have an obligation to minimize distinctions of rank and prevent organizational rigidity by contributing in role areas other than their own and sharing in decisionmaking. Teacher aides are not expected to be substitutes for…
Privacy Expectations in Online Contexts
ERIC Educational Resources Information Center
Pure, Rebekah Abigail
2013-01-01
Advances in digital networked communication technology over the last two decades have brought the issue of personal privacy into sharper focus within contemporary public discourse. In this dissertation, I explain the Fourth Amendment and the role that privacy expectations play in the constitutional protection of personal privacy generally, and…
Education: Expectation and the Unexpected
ERIC Educational Resources Information Center
Fulford, Amanda
2016-01-01
This paper considers concepts of expectation and responsibility, and how these drive dialogic interactions between tutor and student in an age of marketised Higher Education. In thinking about such interactions in terms of different forms of exchange, the paper considers the philosophy of Martin Buber and Emmanuel Levinas on dialogic…
Expectation Effects in Organizational Change
ERIC Educational Resources Information Center
King, Albert S.
1974-01-01
The experiment reported here was conducted during a 12-month period at four plants owned by the same company. Managers were given artificial reports about previous findings obtained in implementing job enlargement and job rotation programs. Led to expect higher productivity as a result of these organizational innovations, the managers increased…
... years of age by sex, race and Hispanic origin Health, United States 2015, table 15 [PDF - 9.8 MB] Life expectancy at birth and at 65 years of age, by sex: Organisation for Economic Co-operation and Development (OECD) countries Health, United States 2015, table 14 [PDF - 9. ...
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
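The log-Laplace-to-Pareto claim can be checked numerically: if log-wealth is Laplace(mu, b)-distributed, then for mu = 0 the survival function above the median is Paretian, P(W > w) = 0.5 * w^(-1/b). A minimal sketch with illustrative parameters, using inverse-CDF sampling:

```python
import math
import random

def log_laplace_sample(mu=0.0, b=0.5, n=100_000, seed=1):
    """Draw wealth values whose logarithm is Laplace(mu, b)-distributed
    (inverse-CDF sampling). Parameters are illustrative, not fitted."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random() - 0.5                    # uniform on [-0.5, 0.5)
        # inverse CDF of the Laplace distribution (sign-symmetric)
        x = mu - b * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
        samples.append(math.exp(x))               # wealth = exp(log-wealth)
    return samples

# For mu = 0: P(W > w) = 0.5 * w**(-1/b) above the median, so b = 0.5
# gives power-law tails with Pareto exponent 1/b = 2.
```

With b = 0.5 the empirical tail fractions track 0.5 * w^(-2) closely (about 0.125 beyond w = 2 and 0.031 beyond w = 4), while the median stays at exp(mu) = 1.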
Maximizing the potential of process engineering databases
McGuire, M.L.; Jones, K.
1989-11-01
The authors discuss their work with a major oil and gas production company, which shows that technical computing, particularly the utilization of integration databases, high-performance engineering workstations, and data networking, can create major profit opportunities. Properly utilized technical computing can make more time available for optimizing the conceptual design process that critically affects the life-cycle economic performance of process plants. Computer-aided drafting has little influence on total economic performance once a plant is operating, but an investment in process engineering effectiveness can earn a leveraged benefit through its effect on both capital investment and future operating costs.
NASA Astrophysics Data System (ADS)
Zhang, Jun; Nan, Hua; Tao, Yuan-Hong; Fei, Shao-Ming
2016-02-01
The mutual unbiasedness between a maximally entangled basis (MEB) and an unextendible maximally entangled system (UMES) in the bipartite system C^2 ⊗ C^{2k} (k>1) is first introduced and discussed in this paper. Two mutually unbiased pairs of a maximally entangled basis and an unextendible maximally entangled system are then constructed; lastly, explicit constructions of mutually unbiased MEBs and UMESs are obtained in C^2 ⊗ C^4 and C^2 ⊗ C^8, respectively.
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
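The quantity both methods maximize is the closed-form Expected Improvement of a Gaussian (Kriging) prediction; TRIKE additionally resizes its trust region by comparing realized to expected improvement. The sketch below is a generic version of these two ingredients with illustrative thresholds, not the paper's exact update rule.

```python
import math

def expected_improvement(mu, sigma, fmin):
    """Closed-form EI of a Gaussian prediction N(mu, sigma^2) over the
    incumbent best objective value fmin (minimization convention)."""
    if sigma <= 0.0:
        return max(fmin - mu, 0.0)
    z = (fmin - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (fmin - mu) * cdf + sigma * pdf

def adjust_trust_region(delta, actual_impr, expected_impr,
                        eta=0.1, shrink=0.5, grow=2.0,
                        delta_min=1e-4, delta_max=1.0):
    """Resize the trust-region radius by the ratio of realized to expected
    improvement (eta and the 0.75 threshold are illustrative choices)."""
    rho = actual_impr / expected_impr if expected_impr > 0.0 else 0.0
    if rho < eta:        # model over-promised: shrink the region
        return max(delta_min, shrink * delta)
    if rho > 0.75:       # model was reliable: allow a larger step next time
        return min(delta_max, grow * delta)
    return delta
```

Each TRIKE-style iterate would maximize `expected_improvement` over the current trust region, evaluate the expensive function there, and then call `adjust_trust_region` with the observed improvement.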
Dekker, Aldo; Chénard, Gilles; Stockhofe, Norbert; Eblé, Phaedra L
2016-01-01
We investigated to what extent maternally derived antibodies interfere with foot-and-mouth disease (FMD) vaccination in order to determine the factors that influence the correct vaccination for piglets. Groups of piglets with maternally derived antibodies were vaccinated at different time points following birth, and the antibody titers to FMD virus (FMDV) were measured using virus neutralization tests (VNT). We used 50 piglets from 5 sows that had been vaccinated 3 times intramuscularly in the neck during pregnancy with FMD vaccine containing strains of FMDV serotypes O, A, and Asia-1. Four groups of 10 piglets were vaccinated intramuscularly in the neck at 3, 5, 7, or 9 weeks of age using a monovalent Cedivac-FMD vaccine (serotype A TUR/14/98). One group of 10 piglets with maternally derived antibodies was not vaccinated, and another group of 10 piglets without maternally derived antibodies was vaccinated at 3 weeks of age and served as a control group. Sera samples were collected, and antibody titers were determined using VNT. In our study, the antibody responses of piglets with maternally derived antibodies vaccinated at 7 or 9 weeks of age were similar to the responses of piglets without maternally derived antibodies vaccinated at 3 weeks of age. The maternally derived antibody levels in piglets depended very strongly on the antibody titer in the sow, so the optimal time for vaccination of piglets will depend on the vaccination scheme and quality of vaccine used in the sows and should, therefore, be monitored and reviewed on regular basis in countries that use FMD prophylactic vaccination. PMID:27446940
Singh, Vivek; Marinescu, Dan C.; Baker, Timothy S.
2014-01-01
Three-dimensional reconstruction of large macromolecules like viruses at resolutions below 10 Å requires a large set of projection images. Several automatic and semi-automatic particle detection algorithms have been developed over the years. Here we present a general technique designed to automatically identify the projection images of particles. The method is based on Markov random field modelling of the projected images and involves a pre-processing of electron micrographs followed by image segmentation and post-processing. The image is modelled as a coupling of two fields: one Markovian and one non-Markovian. The Markovian field represents the segmented image; the micrograph is the non-Markovian field. The image segmentation step involves an estimation of coupling parameters and the maximum a posteriori estimate of the realization of the Markovian field, i.e., the segmented image. Unlike most current methods, no bootstrapping with an initial selection of particles is required. PMID:15065680
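As an illustration of the MAP-segmentation step, the sketch below runs a minimal iterated-conditional-modes (ICM) pass under an Ising-style smoothness prior. This is a generic MRF sketch, not the authors' model, which couples the micrograph to the label field with estimated parameters.

```python
def icm_segment(image, beta=1.0, iters=5):
    """Approximate the MAP labeling of a 2-D image (floats in [0, 1]) under
    a binary Markov random field: a squared-error data term plus an
    Ising smoothness term weighted by `beta`. Labels: 0 background, 1 particle."""
    h, w = len(image), len(image[0])
    # initialize labels by simple thresholding
    labels = [[1 if image[y][x] > 0.5 else 0 for x in range(w)] for y in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_lab, best_e = labels[y][x], float("inf")
                for lab in (0, 1):
                    # data term: squared distance to the class mean (0.0 or 1.0)
                    e = (image[y][x] - lab) ** 2
                    # smoothness term: penalize disagreement with 4-neighbours
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != lab:
                            e += beta
                    if e < best_e:
                        best_lab, best_e = lab, e
                labels[y][x] = best_lab  # greedy local MAP update
    return labels
```

With `beta` large enough, an isolated dark pixel inside a bright region is relabeled as part of the region; with `beta = 0` the result degenerates to per-pixel thresholding.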
Quantum Mechanics and the Principle of Maximal Variety
NASA Astrophysics Data System (ADS)
Smolin, Lee
2016-03-01
Quantum mechanics is derived from the principle that the universe contain as much variety as possible, in the sense of maximizing the distinctiveness of each subsystem. The quantum state of a microscopic system is defined to correspond to an ensemble of subsystems of the universe with identical constituents and similar preparations and environments. A new kind of interaction is posited amongst such similar subsystems which acts to increase their distinctiveness, by extremizing the variety. In the limit of large numbers of similar subsystems this interaction is shown to give rise to Bohm's quantum potential. As a result the probability distribution for the ensemble is governed by the Schroedinger equation. The measurement problem is naturally and simply solved. Microscopic systems appear statistical because they are members of large ensembles of similar systems which interact non-locally. Macroscopic systems are unique, and are not members of any ensembles of similar systems. Consequently their collective coordinates may evolve deterministically. This proposal could be tested by constructing quantum devices from entangled states of a modest number of qubits which, by their combinatorial complexity, can be expected to have no natural copies.
Maximizing NGL recovery by refrigeration optimization
Baldonedo H., A.H.
1999-07-01
PDVSA Petroleo y Gas, S.A. has within its facilities in Lake Maracaibo two plants that extract natural gas liquids (NGL). They use combined mechanical refrigeration and absorption with natural gasoline. Each of these plants processes 420 MMscfd at a pressure of 535 psig and 95°F coming from the compression plants PCTJ-2 and PCTJ-3, respectively. About 40 MMscfd of additional rich gas comes from the high-pressure system. Under present conditions these plants produce on the order of 16,800 and 23,800 b/d of NGL, respectively, with a propane recovery of approximately 75%, limited by the capacity of the refrigeration system. To optimize the operation and design of the refrigeration system and to maximize NGL recovery, a conceptual study was developed that evaluated the following aspects of the process: capacity of the refrigeration system, refrigeration requirements, identification of limitations, and evaluation of system improvements. Based on the results obtained, it was concluded that by relocating some condensers, refurbishing the main refrigeration system turbines, and using HIGH FLUX piping in the auxiliary refrigeration system of the evaporators, propane recovery would increase by 85%, with an additional production of 25,000 b/d of NGL and 15 MMscfd of ethane-rich gas.
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogeneous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5 nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo. PMID:24333249
Core Facilities: Maximizing the Return on Investment
Farber, Gregory K.; Weiss, Linda
2011-01-01
To conduct high-quality state-of-the-art research, clinical and translational scientists need access to specialized core facilities and appropriately trained staff. In this time of economic constraints and increasing research costs, organized and efficient core facilities are essential for researchers who seek to investigate complex translational research questions. Here, we describe efforts at the U.S. National Institutes of Health and academic medical centers to enhance the utility of cores. PMID:21832235
ERIC Educational Resources Information Center
Crank, Ron
This instructional unit is one of 10 developed by students on various energy-related areas that deals specifically with lighting utilization. Its objective is for the student to be able to outline the development of lighting use and conservation and identify major types and operating characteristics of lamps used in electric lighting. Some topics…
Rehabilitation Professionals' Participation Intensity and Expectations of Transition Roles
ERIC Educational Resources Information Center
Oertle, Kathleen Marie
2009-01-01
In this mixed-methods study, an on-line survey and interviews were utilized to gather data regarding the level of participation and expectations rehabilitation professionals have of teachers, youth with disabilities, parents, and themselves during the transition process. The survey response rate was 73.0% (N = 46). Six were selected for interviews…
Critique of "Expected Value" models
May, W.L.
1996-06-01
There are a number of models in the defense community which use a methodology referred to as "Expected Value" to perform sequential calculations of unit attritions or expenditures. The methodology applied to two-sided, dependent, sequential events can result in an incorrect model. An example of such an incorrect model is offered to show that these models may yield results which deviate significantly from a stochastic or Markov process approach. The example was derived from an informal discussion at the Center for Naval Analyses.
Utility solar water heating workshops
Barrett, L.B.
1992-01-01
The objective of this project was to explore the problems and opportunities for utility participation with solar water heating as a DSM measure. Expected benefits from the workshops included an increased awareness and interest by utilities in solar water heating as well as greater understanding by federal research and policy officials of utility perspectives for purposes of planning and programming. Ultimately, the project could result in better information transfer, increased implementation of solar water heating programs, greater penetration of solar systems, and more effective research projects. The objective of the workshops was satisfied. Each workshop succeeded in exploring the problems and opportunities for utility participation with solar water heating as a DSM option. The participants provided a range of ideas and suggestions regarding useful next steps for utilities and NREL. According to evaluations, the participants believed the workshops were very valuable, and they returned to their utilities with new information, ideas, and commitment.
Maximizing the liquid fuel yield in a biorefining process.
Zhang, Bo; von Keitz, Marc; Valentas, Kenneth
2008-12-01
Biorefining strives to recover the maximum value from each fraction, at minimum energy cost. In order to seek an unbiased and thorough assessment of the alleged opportunity offered by biomass fuels, the direct conversion of various lignocellulosic biomasses was studied: aspen pulp wood (Populus tremuloides), aspen wood pretreated with dilute acid, aspen lignin, aspen logging residues, corn stalk, corn spathe, corn cob, corn stover, corn stover pellet, corn stover pretreated with dilute acid, and lignin extracted from corn stover. Besides the heating rate, the yield of liquid products was found to be dependent on the final liquefaction temperature and the length of liquefaction time. The major compounds of the liquid products from various origins were identified by GC-MS. The lignin was found to be a good candidate for the liquefaction process, and biomass fractionation was necessary to maximize the yield of the liquid bio-fuel. The results suggest a biorefinery process combining pretreatment, fermentation to ethanol, liquefaction to bio-crude oil, and other thermo-conversion technologies, such as gasification. Other biorefinery options, including supercritical water gasification and the effectual utilization of the bio-crude oil, are also addressed. PMID:18781691
On the statistical analysis of maximal magnitude
NASA Astrophysics Data System (ADS)
Holschneider, M.; Zöller, G.; Hainzl, S.
2012-04-01
We show how the maximum expected magnitude within a time horizon [0,T] may be estimated from earthquake catalog data within the context of truncated Gutenberg-Richter statistics. We present the results in a frequentist and in a Bayesian setting. Instead of deriving point estimates of this parameter and reporting their performance in terms of expectation value and variance, we focus on the calculation of confidence intervals based on an imposed level of confidence α. We present an estimate of the maximum magnitude within an observational time interval T in the future, given a complete earthquake catalog for a time period Tc in the past and optionally some paleoseismic events. We argue that from a statistical point of view the maximum magnitude in a time window is a reasonable parameter for probabilistic seismic hazard assessment, while the commonly used maximum possible magnitude for all times almost certainly does not allow the calculation of useful (i.e., non-trivial) confidence intervals. In the context of an unbounded GR law we show that the Jeffreys invariant prior distribution yields normalizable posteriors. The predictive distribution based on this prior is explicitly computed.
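As a rough illustration of the frequentist side of this setup, the probability that the maximum magnitude in a future window exceeds some level can be sketched as follows. An unbounded GR law with Aki's maximum-likelihood b-value estimator is assumed for simplicity; the paper's truncated-GR treatment and its confidence intervals are not reproduced here.

```python
import math

def max_magnitude_ccdf(m, catalog_mags, m_c, t_c, t_future):
    """P(maximum magnitude in a future window t_future exceeds m),
    under an unbounded Gutenberg-Richter law with rate and b-value
    estimated from a catalog complete above m_c over t_c years.
    Illustrative point estimate only (no confidence intervals)."""
    n = len(catalog_mags)
    # Aki's maximum-likelihood b-value estimate
    b = math.log10(math.e) / (sum(catalog_mags) / n - m_c)
    rate = n / t_c                      # events per year above m_c
    # GR survival function for a single event's magnitude
    sf = 10 ** (-b * (m - m_c))
    # Poisson occurrence: 1 - P(no event above m in t_future)
    return 1.0 - math.exp(-rate * t_future * sf)
```

Being a monotone function of the estimated b-value and rate, this exceedance probability is near 1 just above the completeness magnitude and falls off rapidly at large m, which is why a bounded "maximum possible magnitude for all times" pins essentially no probability mass and yields only trivial intervals.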
Rare flavor processes in Maximally Natural Supersymmetry
NASA Astrophysics Data System (ADS)
García, Isabel García; March-Russell, John
2015-01-01
We study CP-conserving rare flavor violating processes in the recently proposed theory of Maximally Natural Supersymmetry (MNSUSY). MNSUSY is an unusual supersymmetric (SUSY) extension of the Standard Model (SM) which, remarkably, is untuned at present LHC limits. It employs Scherk-Schwarz breaking of SUSY by boundary conditions upon compactifying an underlying 5-dimensional (5D) theory down to 4D, and is not well-described by softly-broken SUSY, with phenomenology much different from that of the Minimal Supersymmetric Standard Model (MSSM) and its variants. The usual CP-conserving SUSY-flavor problem is automatically solved in MNSUSY due to a residual almost exact U(1)_R symmetry, naturally heavy and highly degenerate 1st- and 2nd-generation sfermions, and heavy gauginos and Higgsinos. Depending on the exact implementation of MNSUSY there exist important new sources of flavor violation involving gauge boson Kaluza-Klein (KK) excitations. The spatial localization properties of the matter multiplets, in particular the brane localization of the 3rd-generation states, imply that KK-parity is broken and tree-level contributions to flavor-changing neutral currents are present in general. Nevertheless, we show that simple variants of the basic MNSUSY model are safe from present flavor constraints arising from kaon and B-meson oscillations, the rare decays B_{s,d} → μ+μ-, μ → ēee, and μ-e conversion in nuclei. We also briefly discuss some special features of the radiative decay μ → eγ. Future experiments, especially those concerned with lepton flavor violation, should see deviations from SM predictions unless one of the MNSUSY variants with enhanced flavor symmetries is realized.
Maximal exercise performance after adaptation to microgravity.
Levine, B D; Lane, L D; Watenpaugh, D E; Gaffney, F A; Buckey, J C; Blomqvist, C G
1996-08-01
The cardiovascular system appears to adapt well to microgravity but is compromised on reestablishment of gravitational forces leading to orthostatic intolerance and a reduction in work capacity. However, maximal systemic oxygen uptake (Vo2) and transport, which may be viewed as a measure of the functional integrity of the cardiovascular system and its regulatory mechanisms, has not been systematically measured in space or immediately after return to Earth after spaceflight. We studied six astronauts (4 men and 2 women, age 35-50 yr) before, during, and immediately after 9 or 14 days of microgravity on two Spacelab Life Sciences flights (SLS-1 and SLS-2). Peak Vo2 (Vo2peak) was measured with an incremental protocol on a cycle ergometer after prolonged submaximal exercise at 30 and 60% of Vo2peak. We measured gas fractions by mass spectrometer and ventilation via turbine flowmeter for the calculation of breath-by-breath Vo2, heart rate via electrocardiogram, and cardiac output (Qc) via carbon dioxide rebreathing. Peak power and Vo2 were well maintained during spaceflight and not significantly different compared with 2 wk preflight. Vo2peak was reduced by 22% immediately postflight (P < 0.05), entirely because of a decrease in peak stroke volume and Qc. Peak heart rate, blood pressure, and systemic arteriovenous oxygen difference were unchanged. We conclude that systemic Vo2peak is well maintained in the absence of gravity for 9-14 days but is significantly reduced immediately on return to Earth, most likely because of reduced intravascular blood volume, stroke volume, and Qc. PMID:8872635
Primary Care Clinician Expectations Regarding Aging
ERIC Educational Resources Information Center
Davis, Melinda M.; Bond, Lynne A.; Howard, Alan; Sarkisian, Catherine A.
2011-01-01
Purpose: Expectations regarding aging (ERA) in community-dwelling older adults are associated with personal health behaviors and health resource usage. Clinicians' age expectations likely influence patients' expectations and care delivery patterns; yet, limited research has explored clinicians' age expectations. The Expectations Regarding Aging…
Maximizing physician performance: a systems approach.
Smith, R
1997-12-01
Managed care organizations are aware of the importance of managing the quality of care and controlling costs associated with the delivery of care. By utilizing physician-level performance reporting, an organization can help its physicians manage the organization's resources across the continuum of care. Physician participation can be obtained by developing a multicomponent program that includes opportunities for physician input regarding resource allocation and benefit packages; by articulating and documenting the organization's goals and priorities; by providing physicians with systemwide data related to indicators of their performance levels; and by offering financial incentives. PMID:10174784
Maximizing the utilization of computer-aided technology for fabrication of composite structures
NASA Astrophysics Data System (ADS)
Pyle, Glenn T.; Rao, Carlo S.
The introduction of the computer in today's engineering environment presents new opportunities for the optimization of the product definition process. The certification effort on McDonnell Douglas Helicopter Company's MD 520N Program is used as a case study to show how the management of digital data can be used as a tool to dramatically reduce the cycle time of producing advanced composite structures. The MD 520N Product Definition Team used MDHC's Unigraphics 3-D CAD/CAM System to develop, design, fabricate, and test a production thruster assembly in just 74 days. This paper documents that effort and discusses the application this process may have in the normal production design environment.
NASA Astrophysics Data System (ADS)
Bai, Xian-Xu; Wereley, Norman M.; Hu, Wei
2015-05-01
A single-degree-of-freedom (SDOF) semi-active vibration control system based on a magnetorheological (MR) damper with an inner bypass is investigated in this paper. The MR damper, employing a pair of concentric tubes between which the key structure, i.e., the inner bypass, is formed and MR fluids are energized, is designed to provide a large dynamic range (i.e., ratio of field-on damping force to field-off damping force) and damping force range. The damping force performance of the MR damper is modeled using a phenomenological model and verified by experimental tests. In order to assess its feasibility and capability in vibration control systems, the mathematical model of an SDOF semi-active vibration control system based on the MR damper and a skyhook control strategy is established. Using an MTS 244 hydraulic vibration exciter system and a dSPACE DS1103 real-time simulation system, an experimental study of the SDOF semi-active vibration control system is also conducted. Simulation results are compared to experimental measurements.
IMPORTANCE OF MITOCHONDRIAL PO2 IN MAXIMAL O2 TRANSPORT AND UTILIZATION: A THEORETICAL ANALYSIS
Cano, I; Mickael, M; Gomez-Cabrero, D.; Tegnér, J; Roca, J; Wagner, PD
2013-01-01
In previous calculations of how the O2 transport system limits V̇O2max, it was reasonably assumed that mitochondrial PO2 (PmO2) could be neglected (set to zero). However, in reality, PmO2 must exceed zero and the red cell to mitochondrion diffusion gradient may therefore be reduced, impairing diffusive transport of O2 and V̇O2max. Accordingly, we investigated the influence of PmO2 on these calculations by coupling previously used equations for O2 transport to one for mitochondrial respiration relating mitochondrial V̇O2 to PO2. This hyperbolic function, characterized by its P50 and V̇MAX, allowed PmO2 to become a model output (rather than set to zero as previously). Simulations using data from exercising normal subjects showed that at V̇O2max, PmO2 was usually < 1 mm Hg, and that the effects on V̇O2max were minimal. However, when O2 transport capacity exceeded mitochondrial V̇MAX, or if P50 were elevated, PmO2 often reached double digit values, thereby reducing the diffusion gradient and significantly decreasing V̇O2max. PMID:24012990
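The coupling described above can be sketched numerically: equate Fick-type diffusive delivery, D·(PcapO2 − PmO2), with a hyperbolic mitochondrial demand curve, V̇MAX·PmO2/(P50 + PmO2), and solve the resulting quadratic for PmO2. The function names, parameter names, and values below are illustrative assumptions, not the paper's model or data.

```python
import math

def mito_po2(p_cap, d_o2, v_max, p50):
    """Solve for mitochondrial PO2 where diffusive O2 delivery
    d_o2 * (p_cap - pm) equals hyperbolic mitochondrial demand
    v_max * pm / (p50 + pm).  Rearranging gives the quadratic
    d_o2*pm^2 + (v_max + d_o2*p50 - d_o2*p_cap)*pm - d_o2*p_cap*p50 = 0,
    whose positive root is the operating point."""
    a = d_o2
    b = v_max + d_o2 * p50 - d_o2 * p_cap
    c = -d_o2 * p_cap * p50
    pm = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    vo2 = d_o2 * (p_cap - pm)          # achieved VO2 at this PmO2
    return pm, vo2
```

The qualitative behavior matches the abstract: when demand (v_max) far exceeds diffusive supply, the solved PmO2 stays near zero; when supply capacity exceeds v_max, PmO2 rises, shrinking the diffusion gradient and the achievable VO2.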
Currens, J.C.
1999-01-01
Analytical data for nitrate and triazines from 566 samples collected over a 3-year period at Pleasant Grove Spring, Logan County, KY, were statistically analyzed to determine the minimum data set needed to calculate meaningful yearly averages for a conduit-flow karst spring. Results indicate that a biweekly sampling schedule augmented with bihourly samples from high-flow events will provide meaningful suspended-constituent and dissolved-constituent statistics. Unless collected over an extensive period of time, daily samples may not be representative and may also be autocorrelated. All high-flow events resulting in a significant deflection of a constituent from base-line concentrations should be sampled. Either the geometric mean or the flow-weighted average of the suspended constituents should be used. If automatic samplers are used, then they may be programmed to collect storm samples as frequently as every few minutes to provide details on the arrival time of constituents of interest. However, only samples collected bihourly should be used to calculate averages. By adopting a biweekly sampling schedule augmented with high-flow samples, the need to continuously monitor discharge, or to search for and analyze existing data to develop a statistically valid monitoring plan, is lessened.
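The two averaging choices recommended above can be sketched in a few lines; the sample values in the test are hypothetical and only illustrate the arithmetic, not the spring's data.

```python
import math

def geometric_mean(conc):
    """Geometric mean concentration, suited to constituent data that
    are approximately log-normally distributed."""
    return math.exp(sum(math.log(c) for c in conc) / len(conc))

def flow_weighted_mean(conc, flow):
    """Discharge-weighted mean: total load divided by total flow, so
    high-flow samples count in proportion to the water they represent."""
    return sum(c * q for c, q in zip(conc, flow)) / sum(flow)
```

The flow-weighted mean is the one that requires paired discharge measurements; the geometric mean needs only the concentrations, which is part of why the abstract offers either as acceptable.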
Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theory field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three factors at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
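For reference, the baseline being recast is ordinary simulated annealing, sketched below in minimal form; the game-theoretic COIN modification itself (replacing each variable's update with a utility-maximizing player) is not reproduced here.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    """Plain simulated annealing: accept any improvement, accept a
    worsening move with probability exp(-dE/T), and cool T geometrically.
    This is the conventional baseline the COIN recasting modifies."""
    x, e = x0, energy(x0)
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        ec = energy(cand)
        if ec <= e or random.random() < math.exp((e - ec) / t):
            x, e = cand, ec
        t *= cooling
    return x, e
```

In the COIN view, the single Metropolis update over all variables is replaced by per-variable players whose private utilities are designed so that self-interested moves improve the global energy, which is what sharpens the exploration step.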
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-10
...), Consumer Specialty Products Association (CSPA), and the Responsible Industry for a Sound Environment (RISE... language for your requested changes. iv. Describe any assumptions and provide any technical information and... Association and the Responsible Industry for a Sound Environment requesting that the Agency: (1)...
Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery
NASA Technical Reports Server (NTRS)
Rose, William I.
1993-01-01
Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth and particle mass in volcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectrometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-02
... published on January 3, 2002, at 67 FR 369-378 (reprinted February 5, 2002, at 67 FR 5365). The Bureau's..., identified by the title of this notice, by any of the following methods: Electronic:...
Maximal Oxygen Uptake, Sweating and Tolerance to Exercise in the Heat
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Castle, B. L.; Ruff, W. K.
1972-01-01
The physiological mechanisms that facilitate acute acclimation to heat have not been fully elucidated, but the result is the establishment of a more efficient cardiovascular system to increase heat dissipation via increased sweating that allows the acclimated man to function with a cooler internal environment and to extend his performance. Men in good physical condition with high maximal oxygen uptakes generally acclimate to heat more rapidly and retain it longer than men in poorer condition. Also, upon first exposure trained men tolerate exercise in the heat better than untrained men. Both resting in heat and physical training in a cool environment confer only partial acclimation when first exposed to work in the heat. These observations suggest separate additive stimuli of metabolic heat from exercise and environmental heat to increase sweating during the acclimation process. However, the necessity of utilizing physical exercise during acclimation has been questioned. Bradbury et al. (1964) have concluded exercise has no effect on the course of heat acclimation since increased sweating can be induced by merely heating resting subjects. Preliminary evidence suggests there is a direct relationship between the maximal oxygen uptake and the capacity to maintain thermal regulation, particularly through the control of sweating. Since increased sweating is an important mechanism for the development of heat acclimation, and fit men have high sweat rates, it follows that upon initial exposure to exercise in the heat, men with high maximal oxygen uptakes should exhibit less strain than men with lower maximal oxygen uptakes. The purpose of this study was: (1) to determine if men with higher maximal oxygen uptakes exhibit greater tolerance than men with lower oxygen uptakes during early exposure to exercise in the heat, and (2) to investigate further the mechanism of the relationship between sweating and maximal work capacity.
Maximizing industrial infrastructure efficiency in Iceland
NASA Astrophysics Data System (ADS)
Ingason, Helgi Thor; Sigfusson, Thorsteinn I.
2010-08-01
As a consequence of the increasing aluminum production in Iceland, local processing of aluminum skimmings has become a feasible business opportunity. A recycling plant for this purpose was built in Helguvik on the Reykjanes peninsula in 2003. The case of the recycling plant reflects increased concern regarding environmental aspects of the industry. An interesting characteristic of this plant is the fact that it is run in the same facilities as a large fishmeal production installation. It is operated by the same personnel and uses—partly—the same equipment and infrastructure. This paper reviews the grounds for these decisions and the experience of this merger of a traditional fish melting industry and a more recent aluminum melting industry after 6 years of operation. The paper is written by the original entrepreneurs behind the company, who provide observations on how the aluminum industry in Iceland has evolved since the starting of Alur’s operation and what might be expected in the near future.
Who are maximizers? Future oriented and highly numerate individuals.
Misuraca, Raffaella; Teuscher, Ursina; Carmeci, Floriana Antonella
2016-08-01
Two studies investigated cognitive mechanisms that may be associated with people's tendency to maximize. Maximizers are individuals who spend a great amount of effort to find the very best option in a decision situation, rather than stopping the decision process when they encounter a satisfying option. These studies show that maximizers are more future oriented than other people, which may motivate them to invest the extra energy into optimal choices. Maximizers also have higher numerical skills, possibly facilitating the cognitive processes involved in decision trade-offs. PMID:25960435
Motor Activity Improves Temporal Expectancy
Fautrelle, Lilian; Mareschal, Denis; French, Robert; Addyman, Caspar; Thomas, Elizabeth
2015-01-01
Certain brain areas involved in interval timing are also important in motor activity. This raises the possibility that motor activity might influence interval timing. To test this hypothesis, we assessed interval timing in healthy adults following different types of training. The pre- and post-training tasks consisted of a button press in response to the presentation of a rhythmic visual stimulus. Alterations in temporal expectancy were evaluated by measuring response times. Training consisted of responding to the visual presentation of regularly appearing stimuli by either: (1) pointing with a whole-body movement, (2) pointing only with the arm, (3) imagining pointing with a whole-body movement, (4) simply watching the stimulus presentation, (5) pointing with a whole-body movement in response to a target that appeared at irregular intervals (6) reading a newspaper. Participants performing a motor activity in response to the regular target showed significant improvements in judgment times compared to individuals with no associated motor activity. Individuals who only imagined pointing with a whole-body movement also showed significant improvements. No improvements were observed in the group that trained with a motor response to an irregular stimulus, hence eliminating the explanation that the improved temporal expectations of the other motor training groups was purely due to an improved motor capacity to press the response button. All groups performed a secondary task equally well, hence indicating that our results could not simply be attributed to differences in attention between the groups. Our results show that motor activity, even when it does not play a causal or corrective role, can lead to improved interval timing judgments. PMID:25806813
Larsen, Filip J; Weitzberg, Eddie; Lundberg, Jon O; Ekblom, Björn
2010-01-15
The anion nitrate, abundant in our diet, has recently emerged as a major pool of nitric oxide (NO) synthase-independent NO production. Nitrate is reduced stepwise in vivo to nitrite and then NO and possibly other bioactive nitrogen oxides. This reductive pathway is enhanced during low oxygen tension and acidosis. A recent study shows a reduction in oxygen consumption during submaximal exercise attributable to dietary nitrate. We went on to study the effects of dietary nitrate on various physiological and biochemical parameters during maximal exercise. Nine healthy, nonsmoking volunteers (age 30+/-2.3 years, VO(2max) 3.72+/-0.33 L/min) participated in this study, which had a randomized, double-blind crossover design. Subjects received dietary supplementation with sodium nitrate (0.1 mmol/kg/day) or placebo (NaCl) for 2 days before the test. This dose corresponds to the amount found in 100-300 g of a nitrate-rich vegetable such as spinach or beetroot. The maximal exercise tests consisted of an incremental exercise to exhaustion with combined arm and leg cranking on two separate ergometers. Dietary nitrate reduced VO(2max) from 3.72+/-0.33 to 3.62+/-0.31 L/min, P<0.05. Despite the reduction in VO(2max), the time to exhaustion tended to increase after nitrate supplementation (524+/-31 vs 563+/-30 s, P=0.13). There was a correlation between the change in time to exhaustion and the change in VO(2max) (R(2)=0.47, P=0.04). A moderate dietary dose of nitrate significantly reduces VO(2max) during maximal exercise using a large active muscle mass. This reduction occurred with a trend toward increased time to exhaustion, implying that two separate mechanisms are involved: one that reduces VO(2max) and another that improves the energetic function of the working muscles. PMID:19913611
Boudewyn, Megan A; Long, Debra L; Swaab, Tamara Y
2015-09-01
The goal of this study was to investigate the use of the local and global contexts for incoming words during listening comprehension. Local context was manipulated by presenting a target noun (e.g., "cake," "veggies") that was preceded by a word that described a prototypical or atypical feature of the noun (e.g., "sweet," "healthy"). Global context was manipulated by presenting the noun in a scenario that was consistent or inconsistent with the critical noun (e.g., a birthday party). Event-related potentials (ERPs) were examined at the feature word and at the critical noun. An N400 effect was found at the feature word, reflecting the effect of compatibility with the global context. Global predictability and the local feature word consistency interacted at the critical noun: A larger N200 was found to nouns that mismatched predictions when the context was maximally constraining, relative to nouns in the other conditions. A graded N400 response was observed at the critical noun, modulated by global predictability and feature consistency. Finally, post-N400 positivity effects of context updating were observed to nouns that were supported by one contextual cue (global/local) but were unsupported by the other. These results indicate that (1) incoming words that are compatible with context-based expectations receive a processing benefit; (2) when the context is sufficiently constraining, specific lexical items may be activated; and (3) listeners dynamically adjust their expectations when input is inconsistent with their predictions, provided that the inconsistency has some level of support from either the global or the local context. PMID:25673006
Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root extract.
Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya
2015-05-01
This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract from Actinidia kolomikta (Maxim. et Rupr.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), the key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, or glucose to normal rats. As a result, AKE showed concentration-dependent inhibition of rat intestinal maltase and sucrase, with IC(50) values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similar to the anti-diabetic drug acarbose. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possessed anti-hyperglycemic effects, and the possible mechanisms were associated with its inhibition of α-glucosidase and with improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds. PMID:26051735
D2-brane Chern-Simons theories: F -maximization = a-maximization
NASA Astrophysics Data System (ADS)
Fluder, Martin; Sparks, James
2016-01-01
We study a system of N D2-branes probing a generic Calabi-Yau three-fold singularity in the presence of a non-zero quantized Romans mass n. We argue that the low-energy effective N = 2 Chern-Simons quiver gauge theory flows to a superconformal fixed point in the IR, and construct the dual AdS_4 solution in massive IIA supergravity. We compute the free energy F of the gauge theory on S^3 using localization. In the large N limit we find F = c (nN)^{1/3} a^{2/3}, where c is a universal constant and a is the a-function of the "parent" four-dimensional N = 1 theory on N D3-branes probing the same Calabi-Yau singularity. It follows that maximizing F over the space of admissible R-symmetries is equivalent to maximizing a for this class of theories. Moreover, we show that the gauge theory result precisely matches the holographic free energy of the supergravity solution, and provide a similar matching of the VEV of a BPS Wilson loop operator.
Maximal and sub-maximal functional lifting performance at different platform heights.
Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J
2015-01-01
Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R(2) = 0.76 ± 0.18, p < 0.05) and MAWL (R(2) = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively. PMID:25420678
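The predictive relationship described above can be sketched numerically. This is a minimal illustration with invented numbers (not the study's data or regression coefficients): fit a line to capacity-versus-height measurements, then shift it through a single known measurement to predict capacity at another platform height.

```python
import numpy as np

# Hypothetical illustration (not the study's data): normalised lifting
# capacity tends to fall roughly linearly as platform height rises, so
# capacity at one height can be predicted from capacity at another.
heights = np.array([1.30, 1.45, 1.60, 1.70])   # platform heights (m)
mlc = np.array([50.0, 44.0, 38.0, 34.0])       # example MLC values (kg)

# Fit a simple linear model: MLC = a * height + b
a, b = np.polyfit(heights, mlc, 1)

def predict_mlc(known_height, known_mlc, target_height):
    """Shift the fitted line through one known measurement and
    read off the predicted capacity at another height."""
    offset = known_mlc - (a * known_height + b)
    return a * target_height + b + offset

# Predict capacity at 1.70 m from a measurement taken at 1.30 m
print(round(predict_mlc(1.30, 50.0, 1.70), 1))
```

The same shifted-line idea works in either direction, which is what lets one measured height stand in for the other three.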
Anaerobic capacity: a maximal anaerobic running test versus the maximal accumulated oxygen deficit.
Maxwell, N S; Nimmo, M A
1996-02-01
The present investigation evaluates a maximal anaerobic running test (MART) against the maximal accumulated oxygen deficit (MAOD) for the determination of anaerobic capacity. Essentially, this involved comparing 18 male students performing two randomly assigned supramaximal runs to exhaustion on separate days. Post warm-up and 1, 3, and 6 min postexercise capillary blood samples were taken during both tests for plasma blood lactate (BLa) determination. In the MART only, blood ammonia (BNH3) concentration was measured, while capillary blood samples were additionally taken after every second sprint for BLa determination. Anaerobic capacity, measured as oxygen equivalents in the MART protocol, averaged 112.2 +/- 5.2 ml.kg-1.min-1. Oxygen deficit, representing the anaerobic capacity in the MAOD test, was an average of 74.6 +/- 7.3 ml.kg-1. There was a significant correlation between the MART and MAOD (r = .83, p < .001). BLa values obtained over time in the two tests showed no significant difference, nor was there any difference in the peak BLa recorded. Peak BNH3 concentration recorded was significantly increased from resting levels at exhaustion during the MART. PMID:8664845
Maximal stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Agarwal, Sahil; Wettlaufer, J. S.
2016-01-01
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
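The kind of computation behind such ensemble estimates can be sketched with an Euler-Maruyama integration of the Lorenz equations with additive noise. The parameter values, noise amplitude, and the choice of the time-averaged z variable as a transport proxy are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the noisy Lorenz system, estimating
# an ensemble-and-time average of z as a crude transport proxy.
rng = np.random.default_rng(0)
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
eps = 0.5                       # noise amplitude (assumed)
dt, nsteps, nens = 1e-3, 20000, 8

def lorenz_drift(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

zbar = []
for _ in range(nens):
    s = np.array([1.0, 1.0, 1.0])
    zs = np.empty(nsteps)
    for k in range(nsteps):
        s = s + lorenz_drift(s) * dt + eps * np.sqrt(dt) * rng.standard_normal(3)
        zs[k] = s[2]
    zbar.append(zs[nsteps // 2:].mean())   # time average, transient discarded

print(np.mean(zbar))   # ensemble-and-time average of z
```

Repeating this over a grid of noise amplitudes (and varying `nens`) is the kind of experiment in which the slow convergence of ensemble and time averages noted above becomes the limiting factor.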
Maximizing Singlet Fission by Intermolecular Packing.
Wang, Linjun; Olivier, Yoann; Prezhdo, Oleg V; Beljonne, David
2014-10-01
A novel nonadiabatic molecular dynamics scheme is applied to study the singlet fission (SF) process in pentacene dimers as a function of longitudinal and lateral displacements of the molecular backbones. Detailed two-dimensional mappings of both instantaneous and long-term triplet yields are obtained, characterizing the advantageous and unfavorable stacking arrangements, which can be achieved by chemical substitutions to the bare pentacene molecule. We show that the SF rate can be increased by more than an order of magnitude through tuning the intermolecular packing, most notably when going from cofacial to the slipped stacked arrangements encountered in some pentacene derivatives. The simulations indicate that the SF process is driven by thermal electron-phonon fluctuations at ambient and high temperatures, expected in solar cell applications. Although charge-transfer states are key to construct continuous channels for SF, a large charge-transfer character of the photoexcited state is found to be not essential for efficient SF. The reported time domain study mimics directly numerous laser experiments and provides novel guidelines for designing efficient photovoltaic systems exploiting the SF process with optimum intermolecular packing. PMID:26278443
Maximizing Exposure Therapy: An Inhibitory Learning Approach
Craske, Michelle G.; Treanor, Michael; Conway, Chris; Zbozinek, Tomislav; Vervliet, Bram
2014-01-01
Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a ‘fear habituation’ approach and ‘belief disconfirmation’ approach within standard cognitive-behavior therapy. Exposure optimization strategies include 1) expectancy violation, 2) deepened extinction, 3) occasional reinforced extinction, 4) removal of safety signals, 5) variability, 6) retrieval cues, 7) multiple contexts, and 8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. PMID:24864005
Drabek, A G
1992-01-01
Private and bilateral donors currently place a great deal of emphasis on the potential importance of nongovernmental organizations (NGOs) in delivering services to dispossessed populations and nascent democracies around the world. This is particularly the case in Southern Africa. The expectation that NGOs can utilize resources more effectively than government may ultimately imperil the credibility of NGOs internationally. NGOs are expected to be: agents of development; community organizers and educators; institution builders; social service providers; humanitarian relief providers; political activists; human rights protectors; police watchdogs and advocates; organizational and financial managers; technical experts in agriculture and health; democracy promoters; innovators and testers of new ideas and technologies; fund raisers; employment creators; credit providers; and an alternative to governments. African NGOs must identify creative solutions that work; improve NGO capacity for research and evaluation, including definition of their own criteria for evaluation; test technologies and monitor results; and refine participatory and action research methods. 1) NGOs need to make decisions about whether they want to become mere service providers or whether they are going to make a long-term commitment to institution-building. 2) NGOs need to fend off attempts by donors to buy into agendas that are not their own. 3) Another major challenge is to reduce NGO dependency on donors and to increase their accountability to their own constituencies. 4) It must also be ensured that institutionalization does not lead to a lack of responsiveness within the NGO community. In Zimbabwe, workshops have been held to assist the NGO community in developing skills in coalition building around various issues to influence governments and donors, whether on women's issues, environmental issues, or cooperatives. If donors promote more effective development work both by encouraging linkages
WHO expectation and industry goals.
Vandersmissen, W
2001-02-01
It is expected that the world's vaccine market will show robust growth over the next few years, yet this growth will come predominantly from the introduction of new vaccines in industrialised countries. Economic market forces will increasingly direct vaccine sales and vaccine development towards the needs of markets with effective purchasing power. Yet the scientific and technological progress that drives the development of such innovative vaccines holds the promise of applicability for vaccines that are highly desirable for developing countries. Corrective measures that take into account economic and industrial reality must be considered to span the widening gap between richer and poorer countries in terms of availability and general use of current and recent vaccines. Such measures must help developing countries to get access to future vaccines for diseases that predominantly or exclusively affect them, but for which the poor economic prospects do not provide a basis for the vaccine industry to undertake costly research and development programmes. Recent initiatives such as GAVI, including the establishment of a reliable, guaranteed purchase fund, could provide a solution to the problem. PMID:11166883
Note on maximally entangled Eisert-Lewenstein-Wilkens quantum games
NASA Astrophysics Data System (ADS)
Bolonek-Lasoń, Katarzyna; Kosiński, Piotr
2015-12-01
Maximally entangled Eisert-Lewenstein-Wilkens games are analyzed. For a general class of gates defined in previous papers by the first author, general conditions are derived that allow one to determine the form of the gate leading to maximally entangled games. The construction becomes particularly simple provided one does not distinguish between games differing by a relabeling of strategies. Some examples are presented.
Detrimental Relations of Maximization with Academic and Career Attitudes
ERIC Educational Resources Information Center
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
ERIC Educational Resources Information Center
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
Minimal Length, Maximal Momentum and the Entropic Force Law
NASA Astrophysics Data System (ADS)
Nozari, Kourosh; Pedram, Pouria; Molkara, M.
2012-04-01
Different candidates of quantum gravity proposal such as string theory, noncommutative geometry, loop quantum gravity and doubly special relativity, all predict the existence of a minimum observable length and/or a maximal momentum which modify the standard Heisenberg uncertainty principle. In this paper, we study the effects of minimal length and maximal momentum on the entropic force law formulated recently by E. Verlinde.
A Method for Maximizing the Internal Consistency Coefficient Alpha.
ERIC Educational Resources Information Center
Pepin, Michel
This paper presents three different ways of computing the internal consistency coefficient alpha for a same set of data. The main objective of the paper is the illustration of a method for maximizing coefficient alpha. The maximization of alpha can be achieved with the aid of a principal component analysis. The relation between alpha max. and the…
Effect of Age and Other Factors on Maximal Heart Rate.
ERIC Educational Resources Information Center
Londeree, Ben R.; Moeschberger, Melvin L.
1982-01-01
To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after exercising with a bicycle, a treadmill, and after swimming were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)
Improving information technology to maximize fenestration energy efficiency
Arasteh, Dariush; Mitchell, Robin; Kohler, Christian; Huizenga, Charlie; Curcija, Dragan
2001-06-06
Improving software for the analysis of fenestration product energy efficiency and developing related information technology products that aid in optimizing the use of fenestration products for energy efficiency are essential steps toward ensuring that more efficient products are developed and that existing and emerging products are utilized in the applications where they will produce the greatest energy savings. Given the diversity of building types and designs and the climates in the U.S., no one fenestration product or set of properties is optimal for all applications. Future tools and procedures to analyze fenestration product energy efficiency will need to both accurately analyze fenestration product performance under a specific set of conditions and to look at whole fenestration product energy performance over the course of a yearly cycle and in the context of whole buildings. Several steps have already been taken toward creating fenestration product software that will provide the information necessary to determine which details of a fenestration product's design can be improved to have the greatest impact on energy efficiency, what effects changes in fenestration product design will have on the comfort parameters that are important to consumers, and how specific fenestration product designs will perform in specific applications. Much work remains to be done, but the energy savings potential justifies the effort. Information is relatively cheap compared to manufacturing. Information technology has already been responsible for many improvements in the global economy--it can similarly facilitate many improvements in fenestration product energy efficiency.
NASA Astrophysics Data System (ADS)
Jakubski, Thomas; Piechoncinski, Michal; Moses, Raphael; Bugata, Bharathi; Schmalfuss, Heiko; Köhler, Ines; Lisowski, Jan; Klobes, Jens; Fenske, Robert
2009-01-01
Especially for advanced masks the reticle inspection operation is a very significant cost factor, since it is a time consuming process and inspection tools are becoming disproportionately expensive. Analyzing and categorizing historical equipment utilization times of the reticle inspection tools however showed a significant amount of time which can be classified as non productive. In order to reduce the inspection costs the equipment utilization needed to be improved. The main contributors to non productive time were analyzed and several use cases identified, where automation utilizing a SECS1 equipment interface was expected to help to reduce these non productive times. The paper demonstrates how real time access to equipment utilization data can be applied to better control manufacturing resources. Scenarios are presented where remote monitoring and control of the inspection equipment can be used to avoid setup errors or save inspection time by faster response to problem situations. Additionally a solution to the second important need, the maximization of tool utilization in cases where not all of the intended functions are available, is explained. Both the models and the software implementation are briefly explained. For automation of the so called inspection strategy a new approach which allows separation of the business rules from the automation infrastructure was chosen. Initial results of inspection equipment performance data tracked through the SECS interface are shown. Furthermore a system integration overview is presented and examples of how the inspection strategy rules are implemented and managed are given.
Maximal entanglement versus entropy for mixed quantum states
Wei, T.-C.; Goldbart, Paul M.; Kwiat, Paul G.; Nemoto, Kae; Munro, William J.; Verstraete, Frank
2003-02-01
Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined, primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy. Along the way, we determine the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.
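Two of the measures paired in such studies, negativity and linear entropy, can be evaluated concretely for the two-qubit Werner state. This is a standard textbook family, used here only to illustrate the quantities; it is not the paper's family of maximally entangled mixed states:

```python
import numpy as np

# Werner state: rho = p |psi^-><psi^-| + (1 - p) I/4, a mix of the
# singlet with the maximally mixed state.
def werner(p):
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet |psi^->
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def partial_transpose(rho):
    r = rho.reshape(2, 2, 2, 2)                # indices (i, j, k, l)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # transpose 2nd qubit

def negativity(rho):
    # Sum of absolute values of negative eigenvalues of rho^{T_B}
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    return float(sum(-e for e in eigs if e < 0))

def linear_entropy(rho):
    # Normalised so a maximally mixed two-qubit state has S_L = 1
    return float((4 / 3) * (1 - np.trace(rho @ rho).real))

for p in (0.2, 0.5, 1.0):
    print(f"p={p}: N={negativity(werner(p)):.3f}, S_L={linear_entropy(werner(p)):.3f}")
```

Sweeping p traces one curve in the entanglement-versus-mixedness plane; the maximally entangled mixed states discussed above are those lying on the boundary of the region reachable by all states.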
System performance evaluation of the MAXIM concept with integrated modeling
NASA Astrophysics Data System (ADS)
Lieber, Michael D.; Gallagher, Dennis J.; Cash, Webster C.; Shipley, Ann F.
2003-03-01
The MAXIM (Micro-Arcsecond X-ray Imaging Mission) and the MAXIM Pathfinder, a technology precursor mission, are considered by NASA to be 'visionary missions' in space astronomy. The current MAXIM mission design would fly multiple spacecraft in formation, each carrying precision optics, to direct x-rays from an astronomical source to collector and imaging spacecraft. The mission architecture is complex and presents technical challenges in formation flying, external metrology, and target acquisition. To further develop the concept, an integrated model (IM) of MAXIM and the MAXIM Pathfinder was developed. Individual subsystem models from the disciplines of structural dynamics, optics, controls, signal processing, detector physics, and disturbance modeling are seamlessly integrated into one cohesive model to efficiently support system-level trades and analysis. The optical system design is a unique combination of optical concepts, and results from the IM were therefore extensively compared with the ASAP optical software.
Expected geoneutrino signal at JUNO
NASA Astrophysics Data System (ADS)
Strati, Virginia; Baldoncini, Marica; Callegari, Ivan; Mantovani, Fabio; McDonough, William F.; Ricci, Barbara; Xhixha, Gerti
2015-12-01
Constraints on the Earth's composition and on its radiogenic energy budget come from the detection of geoneutrinos. The Kamioka Liquid scintillator Antineutrino Detector (KamLAND) and Borexino experiments recently reported the geoneutrino flux, which reflects the amount and distribution of U and Th inside the Earth. The Jiangmen Underground Neutrino Observatory (JUNO) neutrino experiment, designed as a 20 kton liquid scintillator detector, will be built in an underground laboratory in South China, about 53 km from the Yangjiang and Taishan nuclear power plants, each with a planned thermal power of approximately 18 GW. Given the large detector mass and the intense reactor antineutrino flux, JUNO aims not only to collect high-statistics antineutrino signals from the reactors but also to address the challenge of discriminating the geoneutrino signal from the reactor background. The geoneutrino signal at JUNO, in terrestrial neutrino units (TNU), is predicted on the basis of the existing reference Earth model, with the dominant source of uncertainty coming from the modeling of the compositional variability in the local upper crust that surrounds the detector (out to approximately 500 km). A special focus is dedicated to the 6° × 4° local crust surrounding the detector, which is estimated to contribute 44% of the signal. On the basis of a worldwide reference model for reactor antineutrinos, the ratio between the reactor antineutrino and geoneutrino signals in the geoneutrino energy window is estimated to be 0.7 considering reactors operating in 2013, and it reaches a value of 8.9 when the contribution of future nuclear power plants is added. In order to extract useful information about the mantle's composition, a refinement of the abundance and distribution of U and Th in the local crust is required, with particular attention to the geochemical characterization of the accessible upper crust, where 47% of the expected geoneutrino signal originates and this region contributes
Expectation-Based Control of Noise and Chaos
NASA Technical Reports Server (NTRS)
Zak, Michael
2006-01-01
A proposed approach to control of noise and chaos in dynamic systems would supplement conventional methods. The approach is based on fictitious forces composed of expectations governed by Fokker-Planck or Liouville equations that describe the evolution of the probability densities of the controlled parameters. These forces would be utilized as feedback control forces that would suppress the undesired diffusion of the controlled parameters. Examples of dynamic systems in which the approach is expected to prove beneficial include spacecraft, electronic systems, and coupled lasers.
Assigning values to intermediate health states for cost-utility analysis: theory and practice.
Cohen, B J
1996-01-01
Cost-utility analysis (CUA) was developed to guide the allocation of health care resources under a budget constraint. As the generally stated goal of CUA is to maximize aggregate health benefits, the philosophical underpinning of this method is classic utilitarianism. Utilitarianism has been criticized as a basis for social choice because of its emphasis on the net sum of benefits without regard to the distribution of benefits. For example, it has been argued that absolute priority should be given to the worst off when making social choices affecting basic needs. Application of classic utilitarianism requires use of strength-of-preference utilities, assessed under conditions of certainty, to assign quality-adjustment factors to intermediate health states. The two methods commonly used to measure strength-of-preference utility, categorical scaling and time tradeoff, produce rankings that systematically give priority to those who are better off. Alternatively, von Neumann-Morgenstern utilities, assessed under conditions of uncertainty, could be used to assign values to intermediate health states. The theoretical basis for this would be Harsanyi's proposal that social choice be made under the hypothetical assumption that one had an equal chance of being anyone in society. If this proposal is accepted, as well as the expected-utility axioms applied to both individual choice and social choice, the preferred societal arrangement is that with the highest expected von Neumann-Morgenstern utility. In the presence of risk aversion, this will give some priority to the worst-off relative to classic utilitarianism. Another approach is to raise the values obtained by time-tradeoff assessments to a power a between 0 and 1. This would explicitly give priority to the worst off, with the degree of priority increasing as a decreases. Results could be presented over a range of a. The results of CUA would then provide useful information to those holding a range of philosophical points
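The power transformation described above is easy to illustrate: raising utilities u in (0, 1) to a power a in (0, 1] compresses differences near the top of the scale and raises the relative value assigned to worse health states, giving them priority. The utility values below are invented for illustration, not drawn from any published CUA:

```python
# Sketch of the exponent-a adjustment: values of worse states rise
# relative to better ones as a decreases from 1 toward 0.
def adjusted(u, a):
    """Quality-adjustment factor after the power transformation."""
    return u ** a

worse, better = 0.4, 0.8    # hypothetical time-tradeoff utilities

for a in (1.0, 0.5, 0.25):
    ratio = adjusted(worse, a) / adjusted(better, a)
    print(f"a={a}: worse/better value ratio = {ratio:.2f}")
```

At a = 1 the ratio is the untransformed 0.50; as a shrinks, the ratio climbs toward 1, which is exactly the increasing priority for the worst off that the abstract describes.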
Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz
2015-01-01
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression
Preschoolers can recognize violations of the Gricean maxims
Eskritt, Michelle; Whalen, Juanita; Lee, Kang
2010-01-01
Grice (Syntax and semantics: Speech acts, 1975, pp. 41–58, Vol. 3) proposed that conversation is guided by a spirit of cooperation that involves adherence to several conversational maxims. Three types of maxims were explored in the current study: 1) Quality, to be truthful; 2) Relation, to say only what is relevant to a conversation; and 3) Quantity, to provide as much information as required. Three- to five-year-olds were tested to determine the age at which an awareness of these Gricean maxims emerges. Children requested the help of one of two puppets in finding a hidden sticker. One puppet always adhered to the maxim being tested, while the other always violated it. Consistently choosing the puppet that adhered to the maxim was considered indicative of an understanding of that maxim. The results indicate that children were initially only successful in the Relation condition. While in general, children performed better at first in the Quantity condition compared with the Quality condition, 3-year-olds never performed above chance in the Quantity condition. The findings of the present study indicate that preschool children are sensitive to the violation of the Relation, Quality, and Quantity maxims at least under some conditions. PMID:20953298
A taxonomic approach to communicating maxims in interstellar messages
NASA Astrophysics Data System (ADS)
Vakoch, Douglas A.
2011-02-01
Previous discussions of interstellar messages that could be sent to extraterrestrial intelligence have focused on descriptions of mathematics, science, and aspects of human culture and civilization. Although some of these depictions of humanity have implicitly referred to our aspirations, this has not been clearly separated from descriptions of our actions and attitudes as they are. In this paper, a methodology is developed for constructing interstellar messages that convey information about our aspirations, based on a taxonomy of maxims that provide guidance for living. Sixty-six such maxims were judged for degree of similarity to each other. Quantitative measures of the degree of similarity between all pairs of maxims were derived by aggregating similarity judgments across individual participants. These composite similarity ratings were subjected to a cluster analysis, which yielded a taxonomy that highlights perceived interrelationships between individual maxims and identifies major classes of maxims. Such maxims can be encoded in interstellar messages through three-dimensional animation sequences conveying narratives that highlight interactions between individuals. In addition, verbal descriptions of these interactions in Basic English can be combined with the pictorial sequences to increase intelligibility. Online projects that collect messages, such as the SETI Institute's Earth Speaks and La Tierra Habla, can be used to solicit maxims from participants around the world.
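The taxonomy-building step (aggregate pairwise similarity ratings, convert them to distances, cluster hierarchically) can be sketched on a toy four-maxim example. The maxims and similarity values below are invented for illustration, not the study's 66-maxim data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy aggregated similarity matrix for four maxims (values assumed).
maxims = ["be honest", "tell the truth", "help others", "share freely"]
sim = np.array([
    [1.0, 0.9, 0.2, 0.3],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.3, 0.2, 0.8, 1.0],
])

dist = 1.0 - sim                           # similarity -> dissimilarity
Z = linkage(squareform(dist), method="average")   # hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # cut tree into 2 classes

# The two honesty maxims and the two altruism maxims fall into
# separate clusters, forming the major classes of the toy taxonomy.
print(dict(zip(maxims, labels)))
```

Cutting the linkage tree at different depths yields the nested classes of maxims that the taxonomy highlights.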
Multicultural Differences in Women's Expectations of Birth.
Moore, Marianne F
2016-01-01
This review surveyed qualitative and quantitative studies to explore the expectations around birth held by women from different cultures. These studies are grouped according to expectations of personal control; expectations of support from partner, others, and family; expectations of care and behavior from providers such as nurses, doctors, and/or midwives; expectations about the health of the baby; and expectations about pain in childbirth. The findings are discussed, and the roles of Western medical culture, power, and privilege in providing care to these women are noted. PMID:27263233
Nursing students' expectations of the college experience.
Zysberg, Leehu; Zisberg, Anna
2008-09-01
Nursing students' expectations of college have not received much attention in the empirical literature. These expectations may be important in better understanding nurses' motivations, role acquisition, and academic and professional success. The first study discussed in this article examined the reliability and construct validity of an instrument designed to assess students' (N = 95) expectations of their college experience. The results indicate good reliability and validity. The second study discussed in this article examined differences in expectations, comparing nursing and non-nursing students (N = 160) in an urban college setting. The results suggest expectations emphasizing practical and professional aspects (i.e., acquiring a profession, earning more money), followed by self-betterment and social life expectations. Nursing students differed from non-nursing students by reporting higher self-betterment and professional expectations but lower academic expectations. Implications for application and further research are discussed. PMID:18792705
Maximizing Your Investment in Building Automation System Technology.
ERIC Educational Resources Information Center
Darnell, Charles
2001-01-01
Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)
Maximal slicing of D-dimensional spherically symmetric vacuum spacetime
Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru
2009-10-15
We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D ≥ 5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole throat and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D = 5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-01
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein-folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at large power. PMID:23952379
Sensitivity to conversational maxims in deaf and hearing children.
Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael
2010-09-01
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics. PMID:19719886
Interpreting Negative Results in an Angle Maximization Problem.
ERIC Educational Resources Information Center
Duncan, David R.; Litwiller, Bonnie H.
1995-01-01
Presents a situation in which differential calculus is used with inverse trigonometric tangent functions to maximize an angle measure. A negative distance measure ultimately results, requiring a reconsideration of assumptions inherent in the initial figure. (Author/MKR)
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
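The augment-and-test loop described in this abstract can be sketched in a few lines. This is an illustrative sequential sketch under assumed inputs (a graph as adjacency sets plus a spanning tree, which is trivially chordal), not the authors' parallel algorithm; the chordality test here uses simplicial-vertex elimination, which succeeds exactly on chordal graphs.

```python
from itertools import combinations

def is_chordal(adj):
    # adj: dict vertex -> set of neighbours (undirected).
    # A graph is chordal iff it admits a perfect elimination ordering:
    # repeatedly remove a simplicial vertex, i.e. one whose
    # neighbourhood induces a clique.
    adj = {v: set(nb) for v, nb in adj.items()}  # work on a copy
    while adj:
        for v in adj:
            nb = adj[v]
            if all(u in adj[w] for u, w in combinations(nb, 2)):
                for w in nb:
                    adj[w].discard(v)
                del adj[v]
                break
        else:
            return False  # no simplicial vertex left: not chordal
    return True

def augment_to_maximal_chordal(n, tree_edges, candidate_edges):
    # Start from a spanning chordal subgraph (a spanning tree) and
    # greedily add candidate edges that keep the subgraph chordal,
    # in the spirit of the augmentation scheme described above.
    adj = {v: set() for v in range(n)}
    for u, v in tree_edges:
        adj[u].add(v); adj[v].add(u)
    for u, v in candidate_edges:
        if v in adj[u]:
            continue
        adj[u].add(v); adj[v].add(u)
        if not is_chordal(adj):
            adj[u].discard(v); adj[v].discard(u)  # would create a chordless cycle
    return adj
```

For example, on a 4-cycle with a spanning path as the initial subgraph, the closing edge is rejected, because it would create a chordless 4-cycle.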
Community Expectations of College Completion and Attendance
ERIC Educational Resources Information Center
Derden, Michael Wade
2011-01-01
Communities relay expectations of behavior that influence residents' decision making processes. The study's purpose was to define and identify social, cultural, and human capital variables relevant to understanding community expectations of postsecondary attainment. The study sought an operational model of community expectancy that would allow…
7 CFR 760.636 - Expected revenue.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Expected revenue. 760.636 Section 760.636 Agriculture... SPECIAL PROGRAMS INDEMNITY PAYMENT PROGRAMS Supplemental Revenue Assistance Payments Program § 760.636 Expected revenue. The expected revenue for each crop on a farm is: (a) For each insurable crop,...
Parental Expectations of Their Adolescents' Teachers.
ERIC Educational Resources Information Center
Tatar, Moshe; Horenczyk, Gabriel
2000-01-01
Examines parental expectations of their children's teachers through use of the Expectations of Teachers questionnaire. Participating parents (N=765) reported greater expectations for help and assistance, followed by teaching competence and fairness on the part of the teacher. Mothers were found to hold higher fairness, help, and assistance…
Are Grade Expectations Rational? A Classroom Experiment
ERIC Educational Resources Information Center
Hossain, Belayet; Tsigaris, Panagiotis
2015-01-01
This study examines students' expectations about their final grade. An attempt is made to determine whether students form expectations rationally. Expectations in economics, rational or otherwise, carry valuable information and have important implications in terms of both teaching effectiveness and the role of grades as an incentive structure…
Employer Expectations from a Business Education.
ERIC Educational Resources Information Center
Karakaya, Fahri; Karakaya, Fera
1996-01-01
In a survey, 80 businesses, mostly small, ranked 13 educational attributes expected of college students with a business education. Factor analysis shows four distinct skill areas expected from an ideal business education program: research, interpersonal, basic, and quantitative skills. In general, employers expect to hire well-rounded students,…
Siting Samplers to Minimize Expected Time to Detection
Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.
2012-05-02
We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti (1) developed a proof-of-concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal "detect-to-treat" architectures, which maximize the probability of detecting a release. This paper develops a more general approach and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal "detect-to-warn" architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.
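The detect-to-warn objective can be illustrated with a minimal sketch: given a hypothetical matrix `times[s][r]` of detection times (location s, release scenario r) and scenario probabilities, place samplers greedily to minimize the expected time to detection. This greedy surrogate is for illustration only; it is not the paper's optimization method, and the inputs are assumed.

```python
def expected_detection_time(chosen, times, probs):
    # times[s][r]: time for a sampler at candidate location s to detect
    # release scenario r (use float('inf') if it never detects it).
    # The network detects a scenario at the earliest of its samplers.
    return sum(p * min(times[s][r] for s in chosen)
               for r, p in enumerate(probs))

def greedy_placement(times, probs, k):
    # Repeatedly add the candidate location whose inclusion most
    # reduces the expected time to detection over the scenarios.
    chosen, candidates = [], set(range(len(times)))
    for _ in range(k):
        best = min(candidates,
                   key=lambda s: expected_detection_time(chosen + [s], times, probs))
        chosen.append(best)
        candidates.discard(best)
    return chosen
```

With two equally likely scenarios and three candidate locations, the greedy first picks the location with the best average coverage, then the one that complements it.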
Maximal oxygen uptake during exercise using trained or untrained muscles.
Moreira-da-Costa, M; Russo, A K; Piçarro, I C; Silva, A C; Leite-de-Barros-Neto, T; Tarasantchi, J; Barbosa, A S
1984-01-01
Maximal oxygen uptake, VO2 max, was determined for cyclists, long-distance runners and non-athletes during uphill running (treadmill) and cycling (cycloergometer) to compare trained and untrained muscles. Blood lactate, maximal heart rate and maximal ventilation during work were also measured. VO2 max was higher for runners and non-athletes during exercise on the treadmill and higher for cyclists during exercise on the cycloergometer. For runners and non-athletes, maximal heart rate accompanied the increase in VO2 max, whereas similar values were obtained for cyclists on both ergometers. Maximal ventilation during work accompanied the difference in VO2 max in both groups of athletes but among non-athletes it was similar during exercise on both the cycloergometer and the treadmill. Blood lactate was similar during exercise on both ergometers for all groups. These results suggest that the quantitative effects of training on cardiovascular and respiratory functions may only be properly evaluated by using an ergometer which requires an activity similar to that usually performed by the subjects. Cycle riding may possibly induce significant and specific alterations in the muscles involved in the exercise, thus increasing peripheral O2 uptake even after stabilization of maximal cardiac output, whereas running may well induce an improvement of all factors which are responsible for aerobic work power. PMID:6518340
Evolution of Shanghai STOCK Market Based on Maximal Spanning Trees
NASA Astrophysics Data System (ADS)
Yang, Chunxia; Shen, Ying; Xia, Bingying
2013-01-01
In this paper, using a moving window to scan through every stock price time series over the period from 2 January 2001 to 11 March 2011, and mutual information to measure the statistical interdependence between stock prices, we construct a corresponding weighted network for 501 Shanghai stocks in every given window. Next, we extract its maximal spanning tree and track the structural variation of the Shanghai stock market by analyzing the average path length, the influence of the center node, and the p-value for every maximal spanning tree. A further analysis of the structural properties of maximal spanning trees over different periods of the Shanghai stock market is carried out. All the obtained results indicate that the periods around 8 August 2005, 17 October 2007, and 25 December 2008 are turning points of the Shanghai stock market. At these turning points, the topology of the maximal spanning tree changes markedly: the degree of separation between nodes increases, the structure becomes looser, the influence of the center node gets smaller, and the degree distribution of the maximal spanning tree is no longer a power law. Lastly, we analyze the variations of the single-step and multi-step survival ratios for all maximal spanning trees and find that pairs of stocks are closely bonded and hard to break in the short term; on the contrary, no pair of stocks remains closely bonded for a long time.
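A maximal spanning tree over similarity weights can be extracted with Kruskal's algorithm run on descending edge weights. The sketch below assumes precomputed pairwise mutual-information values as edge weights and is not tied to the authors' data or windowing scheme.

```python
def maximal_spanning_tree(n, weighted_edges):
    # Kruskal's algorithm, sorting by descending weight (e.g. mutual
    # information between two stocks): add an edge whenever it joins
    # two previously disconnected components (union-find below).
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for u, v, w in sorted(weighted_edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

On a connected graph of n nodes this returns n − 1 edges maximizing total weight; rerunning it per moving window would yield the tree sequence analyzed in the abstract.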
Maximal non-classicality in multi-setting Bell inequalities
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Zohren, Stefan; Pawlowski, Marcin
2016-04-01
The discrepancy between maximally entangled states and maximally non-classical quantum correlations is well-known but still not well understood. We aim to investigate the relation between quantum correlations and entanglement in a family of Bell inequalities with N settings and d outcomes. Using analytical as well as numerical techniques, we derive both maximal quantum violations and violations obtained from maximally entangled states. Furthermore, we study the most non-classical quantum states in terms of their entanglement entropy for large values of d and many measurement settings. Interestingly, we find that the entanglement entropy behaves very differently depending on whether N = 2 or N > 2: when N = 2 the entanglement entropy is a monotone function of d and the most non-classical state is far from maximally entangled, whereas when N > 2 the entanglement entropy is a non-monotone function of d and converges to that of the maximally entangled state in the limit of large d.
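For reference, the entanglement entropy used here is the von Neumann entropy of the reduced state; a maximally entangled state of local dimension d has d equal Schmidt coefficients 1/d and entropy log2(d). A minimal sketch (the function name and inputs are illustrative, not from the paper):

```python
import math

def entanglement_entropy(schmidt_probs):
    # Von Neumann entropy (in bits) of the reduced density matrix of a
    # bipartite pure state, computed from its squared Schmidt
    # coefficients, which form a probability distribution.
    return -sum(p * math.log2(p) for p in schmidt_probs if p > 0)
```

A product state ([1.0]) gives entropy 0, while the maximally entangled state for d = 4 ([0.25, 0.25, 0.25, 0.25]) gives log2(4) = 2 bits.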
Maximizing the detection of near-Earth objects
NASA Astrophysics Data System (ADS)
Albin, T.; Albrecht, S.; Koschny, D.; Drolshagen, G.
2014-07-01
Planetary bodies with a perihelion equal to or less than 1.3 astronomical units (au) are called near-Earth objects (NEOs). These objects are divided into 4 sub-families, two of which cross Earth's orbit and may be a potential hazard for the planet. The Tunguska event and the incident in Chelyabinsk last year have shown the devastating destructiveness of NEOs with a size of only approximately 40 and 20 meters, respectively. To predict and identify further threats, telescopic NEO surveys currently extend our knowledge of the population of these objects. Today (March 2014) approximately 10,700 NEOs are known. Based on an extrapolation of the current population, Bottke et al. (2002) predict a total number of N≈(1.0±0.5)×10^{8} NEOs up to an absolute magnitude of H = 30.5 mag. Additionally, Bottke et al. (2002) computed a de-biased model of the expected orbital elements distribution of the NEOs. They have investigated the theoretical distribution of NEOs by a dynamical simulation, following the orbital evolution of these objects from several source regions. Based on both models we performed simulations of the detectability of the theoretical NEO population for certain telescopes with certain properties. The goal of these simulations is to optimize the search strategies of NEO surveys. Our simulation models the optical telescope attributes (main and secondary mirror size, optical throughput, field-of-view), the electronics (CCD camera, pixel size, quantum efficiency, gain, exposure time, pixel binning, dark/bias noise, signal-to-noise ratio), atmospheric effects (seeing, sky background illumination), and the brightness and angular velocity of the NEOs. We present exemplary results for two telescopes currently developed by the European Space Agency for a future NEO survey: the so-called Fly-Eye Telescope, a 1-m effective aperture telescope with a field of view of 6.5×6.5 deg^2, and the Test-Bed Telescope, with an aperture of 56 cm and a field of view of 2.2×2.2 deg^2.
Analytical Properties of Credibilistic Expectation Functions
Wang, Bo; Watada, Junzo
2014-01-01
The expectation function of a fuzzy variable is an important and widely used criterion in fuzzy optimization, and sound properties of the expectation function may help in model analysis and solution algorithm design for fuzzy optimization problems. The present paper deals with analytical properties of credibilistic expectation functions of fuzzy variables in three respects. First, some continuity theorems on the continuity and semicontinuity conditions are proved for the expectation functions. Second, a differentiation formula of the expectation function is derived which tells that, under certain conditions, the derivative of the fuzzy expectation function with respect to the parameter equals the expectation of the derivative of the fuzzy function with respect to the parameter. Finally, a law of large numbers for fuzzy variable sequences is obtained by leveraging the Chebyshev inequality for fuzzy variables. Some examples are provided to verify the results obtained. PMID:24723800
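The differentiation formula described above can be written, under the stated regularity conditions, as follows (notation assumed here: $E$ the credibilistic expectation, $\xi$ a fuzzy variable, $x$ a real parameter):

```latex
\frac{\mathrm{d}}{\mathrm{d}x}\, E\bigl[ f(\xi, x) \bigr]
  \;=\; E\!\left[ \frac{\partial}{\partial x} f(\xi, x) \right]
```

That is, differentiation with respect to the parameter and taking the credibilistic expectation may be interchanged, in direct analogy with the classical interchange of derivative and probabilistic expectation.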
Expectancy effects in memory for melodies.
Schmuckler, M A
1997-12-01
Two experiments explored the relation between melodic expectancy and melodic memory. In Experiment 1, listeners rated the degree to which different endings confirmed their expectations for a set of melodies. After providing these expectancy ratings, listeners received a recognition memory test in which they discriminated previously heard melodies from new melodies. Recognition memory in this task positively correlated with perceived expectancy, and was related to the estimated tonal coherence of these melodies. Experiment 2 extended these results, demonstrating better recognition memory for high expectancy melodies, relative to medium and low expectancy melodies. This experiment also observed asymmetrical memory confusions as a function of perceived expectancy. These findings fit with a model of musical memory in which schematically central events are better remembered than schematically peripheral events. PMID:9606947
NASA Astrophysics Data System (ADS)
Jafferis, Noah T.; Smith, Michael J.; Wood, Robert J.
2015-06-01
Increasing the energy and power density of piezoelectric actuators is very important for any weight-sensitive application, and is especially crucial for enabling autonomy in micro/milli-scale robots and devices utilizing this technology. This is achieved by maximizing the mechanical flexural strength and electrical dielectric strength through the use of laser-induced melting or polishing, insulating edge coating, and crack-arresting features, combined with features for rigid ground attachments to maximize force output. Manufacturing techniques have also been developed to enable mass customization, in which sheets of material are pre-stacked to form a laminate from which nearly arbitrary planar actuator designs can be fabricated using only laser cutting. These techniques have led to a 70% increase in energy density and an increase in mean lifetime of at least 15× compared to prior manufacturing methods. In addition, measurements have revealed a doubling of the piezoelectric coefficient when operating at the high fields necessary to achieve maximal energy densities, along with an increase in the Young’s modulus at the high compressive strains encountered—these two effects help to explain the higher performance of our actuators as compared to that predicted by linear models.
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano; Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules with network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
NASA Technical Reports Server (NTRS)
Sutliff, Thomas J.; Otero, Angel M.; Urban, David L.
2002-01-01
The Physical Sciences Research Program of NASA sponsors a broad suite of peer-reviewed research investigating fundamental combustion phenomena and applied combustion research topics. This research is performed through both ground-based and on-orbit research capabilities. The International Space Station (ISS) and two facilities, the Combustion Integrated Rack and the Microgravity Science Glovebox, are key elements in the execution of microgravity combustion flight research planned for the foreseeable future. This paper reviews the Microgravity Combustion Science research planned for the International Space Station implemented from 2003 through 2012. Examples of selected research topics, expected outcomes, and potential benefits will be provided. This paper also summarizes a multi-user hardware development approach, recapping the progress made in preparing these research hardware systems. Within the description of this approach, an operational strategy is presented that illustrates how utilization of constrained ISS resources may be maximized dynamically to increase science through design decisions made during hardware development.
STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS*
HUDOMIET, PÉTER; KÉZDI, GÁBOR; WILLIS, ROBERT J.
2011-01-01
This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households' expectations about the returns on the stock market index. We use data from the Health and Retirement Study fielded from February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies a substantial long-term increase in disagreement. The increase in disagreement is larger among stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market. PMID:21547244
Rapid Expectation Adaptation during Syntactic Comprehension
Fine, Alex B.; Jaeger, T. Florian; Farmer, Thomas A.; Qian, Ting
2013-01-01
When we read or listen to language, we are faced with the challenge of inferring intended messages from noisy input. This challenge is exacerbated by considerable variability between and within speakers. Focusing on syntactic processing (parsing), we test the hypothesis that language comprehenders rapidly adapt to the syntactic statistics of novel linguistic environments (e.g., speakers or genres). Two self-paced reading experiments investigate changes in readers’ syntactic expectations based on repeated exposure to sentences with temporary syntactic ambiguities (so-called “garden path sentences”). These sentences typically lead to a clear expectation violation signature when the temporary ambiguity is resolved to an a priori less expected structure (e.g., based on the statistics of the lexical context). We find that comprehenders rapidly adapt their syntactic expectations to converge towards the local statistics of novel environments. Specifically, repeated exposure to a priori unexpected structures can reduce, and even completely undo, their processing disadvantage (Experiment 1). The opposite is also observed: a priori expected structures become less expected (even eliciting garden paths) in environments where they are hardly ever observed (Experiment 2). Our findings suggest that, when changes in syntactic statistics are to be expected (e.g., when entering a novel environment), comprehenders can rapidly adapt their expectations, thereby overcoming the processing disadvantage that mistaken expectations would otherwise cause. Our findings take a step towards unifying insights from research in expectation-based models of language processing, syntactic priming, and statistical learning. PMID:24204909
Premenstrual symptoms and smoking-related expectancies.
Pang, Raina D; Bello, Mariel S; Stone, Matthew D; Kirkpatrick, Matthew G; Huh, Jimi; Monterosso, John; Haselton, Martie G; Fales, Melissa R; Leventhal, Adam M
2016-06-01
Given that prior research implicates smoking abstinence in increased premenstrual symptoms, tobacco withdrawal, and smoking behaviors, it is possible that women with more severe premenstrual symptoms have stronger expectancies about the effects of smoking and abstaining from smoking on mood and withdrawal. However, such relations have not been previously explored. This study examined relations between premenstrual symptoms experienced in the last month and expectancies that abstaining from smoking results in withdrawal (i.e., smoking abstinence withdrawal expectancies), that smoking is pleasurable (i.e., positive reinforcement smoking expectancies), and smoking relieves negative mood (i.e., negative reinforcement smoking expectancies). In a cross-sectional design, 97 non-treatment seeking women daily smokers completed self-report measures of smoking reinforcement expectancies, smoking abstinence withdrawal expectancies, premenstrual symptoms, mood symptoms, and nicotine dependence. Affect premenstrual symptoms were associated with increased negative reinforcement smoking expectancies, but not over and above covariates. Affect and pain premenstrual symptoms were associated with increased positive reinforcement smoking expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Affect, pain, and water retention premenstrual symptoms were associated with increased smoking abstinence withdrawal expectancies, but only affect premenstrual symptoms remained significant in adjusted models. Findings from this study suggest that addressing concerns about withdrawal and alternatives to smoking may be particularly important in women who experience more severe premenstrual symptoms, especially affect-related changes. PMID:26869196
Integrated life sciences technology utilization development program
NASA Technical Reports Server (NTRS)
1975-01-01
The goal of the TU program was to maximize the development of operable hardware and systems which will be of substantial benefit to the public. Five working prototypes were developed, and a meal system for the elderly is now undergoing evaluation. Manpower utilization is shown relative to the volume of requests in work for each month. The ASTP mobile laboratories and post Skylab bedrest study are also described.
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
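The two local properties named above, a minimum separation between disk centers and a maximum coverage distance over the domain, can be checked directly. A minimal sketch, assuming for simplicity a uniform radius r rather than the paper's spatially varying sizing function (the function names are illustrative, not from the paper):

```python
import math

def is_conflict_free(centers, r):
    """Minimum-separation property: no two disk centers lie closer than r."""
    return all(math.dist(p, q) >= r
               for i, p in enumerate(centers)
               for q in centers[i + 1:])

def covers(centers, r, point):
    """Coverage property: the point lies within distance r of some center."""
    return any(math.dist(c, point) <= r for c in centers)

# Three disks of radius 1 in the plane
centers = [(0.0, 0.0), (1.5, 0.0), (0.75, 1.3)]
print(is_conflict_free(centers, 1.0))      # True: all pairs >= 1 apart
print(covers(centers, 1.0, (0.75, 0.5)))   # True: point is inside a disk
```

A density-tuning operation such as inject or eject would re-run both checks locally around the modified disk, since the properties are local.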
Molecular maximizing characterizes choice on Vaughan's (1981) procedure
Silberberg, Alan; Ziriax, John M.
1985-01-01
Pigeons keypecked on a two-key procedure in which their choice ratios during one time period determined the reinforcement rates assigned to each key during the next period (Vaughan, 1981). During each of four phases, which differed in the reinforcement rates they provided for different choice ratios, the duration of these periods was four minutes, duplicating one condition from Vaughan's study. During the other four phases, these periods lasted six seconds. When these periods were long, the results were similar to Vaughan's and appeared compatible with melioration theory. But when these periods were short, the data were consistent with molecular maximizing (see Silberberg & Ziriax, 1982) and were incompatible with melioration, molar maximizing, and matching. In a simulation, stat birds following a molecular-maximizing algorithm responded on the short- and long-period conditions of this experiment. When the time periods lasted four minutes, the results were similar to Vaughan's and to the results of the four-minute conditions of this study; when the time periods lasted six seconds, the choice data were similar to the data from real subjects for the six-second conditions. Thus, a molecular-maximizing response rule generated choice data comparable to those from the short- and long-period conditions of this experiment. These data show that, among extant accounts, choice on the Vaughan procedure is most compatible with molecular maximizing. PMID:16812409
Ventilatory patterns differ between maximal running and cycling.
Tanner, David A; Duke, Joseph W; Stager, Joel M
2014-01-15
To determine the effect of exercise mode on ventilatory patterns, 22 trained men performed two maximal graded exercise tests: one running on a treadmill and one cycling on an ergometer. Tidal flow-volume (FV) loops were recorded during each minute of exercise, with maximal loops measured pre- and post-exercise. Running resulted in a greater VO2peak than cycling (62.7±7.6 vs. 58.1±7.2 mL kg(-1) min(-1)). Although maximal ventilation (VE) did not differ between modes, ventilatory equivalents for O2 and CO2 were significantly larger during maximal cycling. Arterial oxygen saturation (estimated via ear oximeter) was also greater during maximal cycling, as were end-expiratory (EELV; 3.40±0.54 vs. 3.21±0.55 L) and end-inspiratory lung volumes (EILV; 6.24±0.88 vs. 5.90±0.74 L). Based on these results we conclude that ventilatory patterns differ as a function of exercise mode, and these observed differences are likely due to the differences in posture adopted during exercise in these modes. PMID:24211317
Predicted maximal heart rate for upper body exercise testing.
Hill, M; Talbot, C; Price, M
2016-03-01
Age-predicted maximal heart rate (HRMAX) equations are commonly used for prescribing exercise regimens, as criteria for achieving maximal exertion, and for diagnostic exercise testing. Despite the growing popularity of upper body exercise in both healthy and clinical settings, no recommendations are available for exercise modes using the smaller upper body muscle mass. The purpose of this study was to determine how well commonly used age-adjusted prediction equations for HRMAX estimate actual HRMAX for upper body exercise in healthy young and older adults. A total of 30 young (age: 20 ± 2 years, height: 171.9 ± 32.8 cm, mass: 77.7 ± 12.6 kg) and 20 older adults (age: 66 ± 6 years, height: 162 ± 8.1 cm, mass: 65.3 ± 12.3 kg) undertook maximal incremental exercise tests on a conventional arm crank ergometer. Age-adjusted maximal heart rate was calculated using prediction equations based on leg exercise and compared with measured HRMAX data for the arms. Maximal HR for arm exercise was significantly overpredicted by age-adjusted prediction equations in both young and older adults. Subtracting 10-20 beats min(-1) from conventional prediction equations provides a reasonable estimate of HRMAX for upper body exercise in healthy older and younger adults. PMID:25319169
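The kind of adjustment the study recommends can be sketched as follows. The specific leg-based equation used here (the Tanaka-style 208 − 0.7 × age) and the default offset of 15 beats min(-1) (the midpoint of the suggested 10-20 range) are illustrative assumptions, not values taken verbatim from this paper:

```python
def predicted_hr_max(age, arm_exercise=False, arm_offset=15.0):
    """Age-predicted maximal heart rate.

    Uses a leg-exercise prediction equation (208 - 0.7 * age, an
    illustrative choice), optionally lowered by arm_offset beats/min
    for upper body (arm crank) exercise, as the abstract suggests a
    10-20 beats/min subtraction is reasonable.
    """
    hr = 208.0 - 0.7 * age        # leg-exercise prediction
    if arm_exercise:
        hr -= arm_offset          # crude adjustment for arm cranking
    return hr

print(predicted_hr_max(20))        # 194.0 (leg exercise)
print(predicted_hr_max(20, True))  # 179.0 (arm exercise, 15 bpm lower)
```

In practice one would pick the offset within the 10-20 beats min(-1) range depending on how conservative an estimate is needed.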
Stock Market Expectations of Dutch Households
Hurd, Michael; van Rooij, Maarten; Winter, Joachim
2013-01-01
Despite its importance for the analysis of life-cycle behavior and, in particular, retirement planning, stock ownership by private households is poorly understood. Among other approaches to investigate this puzzle, recent research has started to elicit private households’ expectations of stock market returns. This paper reports findings from a study that collected data over a two-year period both on households’ stock market expectations (subjective probabilities of gains or losses) and on whether they own stocks. We document substantial heterogeneity in financial market expectations. Expectations are correlated with stock ownership. Over the two years of our data, stock market prices increased, and expectations of future stock market price changes also increased, lending support to the view that expectations are influenced by recent stock gains or losses. PMID:23997423
Increasing hope by addressing clients' outcome expectations.
Swift, Joshua K; Derthick, Annie O
2013-09-01
Addressing clients' outcome expectations is an important clinical process that can lead to a strong therapeutic alliance, more positive treatment outcomes, and decreased rates of premature termination from psychotherapy. Five interventions designed to foster appropriate outcome expectations are discussed, including presenting a convincing treatment rationale, increasing clients' faith in their therapists, expressing faith in clients, providing outcome education, and comparing progress with expectations. Clinical examples and research support are provided for each. PMID:24000836
Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.
Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen
2016-01-01
Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials are seldom reported, and where they are, adherence, at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspective paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement, utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher-quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094
Tabita, F. Robert
2013-07-30
In this study, the Principal Investigator, F.R. Tabita, has teamed up with J. C. Liao from UCLA. The project's main goal is to manipulate regulatory networks in phototrophic bacteria so as to maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved, which opens the potential to utilize waste materials as a feedstock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study seeks to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, because of their great metabolic versatility, such organisms highly regulate these processes in the cell, and since virtually all such capabilities are dispensable, they provide excellent experimental systems for studying aspects of molecular control and biochemistry/physiology.
NASA Astrophysics Data System (ADS)
Du, Sijun; Jia, Yu; Seshia, Ashwin
2015-12-01
A resonant vibration energy harvester typically comprises a clamped anchor and a vibrating shuttle with a proof mass. Piezoelectric materials are embedded at locations of high strain in order to transduce mechanical deformation into electric charge. Conventional designs for piezoelectric vibration energy harvesters (PVEH) usually cover the entire surface area of the cantilever with piezoelectric material and metal electrode layers, with no consideration given to the trade-off involved in maximizing output power. This paper reports on the theory and experimental verification underpinning optimization of the active electrode area of a cantilevered PVEH in order to maximize output power. The analytical formulation uses Euler-Bernoulli beam theory to model the mechanical response of the cantilever. The expression for output power reduces to a fifth-order polynomial in the electrode area. The maximum output power corresponds to the case in which 44% of the cantilever area is covered by electrode metal. Experimental results are provided to verify the theory.
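The optimization step, locating the maximum of a quintic in the electrode-area fraction over [0, 1], can be sketched numerically. The polynomial below, P(a) = a^2 (1 - a)^3, is a toy stand-in: its maximum falls at a = 0.4, not at the 44% reported for the actual device, whose coefficients are not given in the abstract:

```python
import numpy as np

# Toy quintic in the electrode-area fraction a (NOT the device's actual
# power polynomial): P(a) = a^2 * (1 - a)^3, maximized on a fine grid.
a = np.linspace(0.0, 1.0, 100001)
power = a**2 * (1.0 - a)**3
a_opt = a[np.argmax(power)]
print(round(float(a_opt), 3))   # 0.4 for this toy polynomial
```

With the real device coefficients, the same grid search (or a root of the quartic derivative) would recover the reported 44% optimum.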
Column generation algorithms for exact modularity maximization in networks.
Aloise, Daniel; Cafieri, Sonia; Caporossi, Gilles; Hansen, Pierre; Perron, Sylvain; Liberti, Leo
2010-10-01
Finding modules, or clusters, in networks currently attracts much attention in several domains. The most studied criterion for doing so, due to Newman and Girvan [Phys. Rev. E 69, 026113 (2004)], is modularity maximization. Many heuristics have been proposed for maximizing modularity; they rapidly yield near-optimal solutions, or sometimes optimal ones, but without a guarantee of optimality. There are few exact algorithms, prominent among which is that of Xu [Eur. Phys. J. B 60, 231 (2007)]. Modularity maximization can also be expressed as a clique partitioning problem, to which the row generation algorithm of Grötschel and Wakabayashi [Math. Program. 45, 59 (1989)] can be applied. We propose to extend both of these algorithms using the powerful column generation methods for linear and nonlinear integer programming. The performance of the four resulting algorithms is compared on problems from the literature. Instances with up to 512 entities are solved exactly. Moreover, the computing times for previously solved problems are reduced substantially. PMID:21230350
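The Newman-Girvan modularity being maximized here can be computed directly from its definition, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A minimal sketch of the objective only (not of the column generation algorithm itself):

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q of a partition.

    adj: undirected graph as {node: set(neighbors)}.
    communities: partition of the nodes as a list of sets.
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    q = 0.0
    for comm in communities:
        for i in comm:
            for j in comm:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge: a natural 2-community split
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))   # 5/14 ≈ 0.357
```

Exact algorithms such as those in the paper search over all partitions for the one maximizing this Q.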
Force Irregularity Following Maximal Effort: The After-Peak Reduction.
Doucet, Barbara M; Mettler, Joni A; Griffin, Lisa; Spirduso, Waneen
2016-08-01
Irregularities in force output are present throughout human movement and can impair task performance. We investigated the presence of a large force discontinuity (after-peak reduction, APR) that appeared immediately following the peak in maximal-effort ramp contractions performed with the thumb adductor and ankle dorsiflexor muscles in 25 young adult participants (76% male, 24% female; M age = 24.4 years, SD = 7.1). The APR displayed similar parameters in both muscle groups, with comparable drops in force at the APR minima (thumb adductor: 27.5 ± 7.5% maximal voluntary contraction; ankle dorsiflexor: 25.8 ± 6.2% maximal voluntary contraction). A trend toward fewer APRs with successive ramp trials was observed, suggesting a learning effect. Further investigation should explore the underlying neural mechanisms contributing to the APR. PMID:27502241
Complex and Reoperative Colorectal Surgery: Setting Expectations and Learning from Experience.
Kin, Cindy
2016-06-01
A range of topics are covered in this issue dedicated to complex and reoperative colorectal surgery, from radiation-induced surgical problems, to enterocutaneous fistulas and locally advanced or recurrent rectal cancer. Common themes include the importance of operative planning and patient counseling on the expected functional outcomes. Experts in the field offer their technical tips and clinical lessons to maximize outcomes and minimize complications in these challenging cases. PMID:27247530
Predicting Problem Behaviors with Multiple Expectancies: Expanding Expectancy-Value Theory
ERIC Educational Resources Information Center
Borders, Ashley; Earleywine, Mitchell; Huey, Stanley J.
2004-01-01
Expectancy-value theory emphasizes the importance of outcome expectancies for behavioral decisions, but most tests of the theory focus on a single behavior and a single expectancy. However, the matching law suggests that individuals consider expected outcomes for both the target behavior and alternative behaviors when making decisions. In this…
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake.
Convertino, V A
1997-02-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting that the reduction is independent of any disease state. The magnitude of reduction in VO2max is dependent on the duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However, reductions in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake.
ERIC Educational Resources Information Center
Varga, Julia
2006-01-01
This paper analyses students' application strategies to higher education, the effects of labour market expectations and admission probabilities. The starting hypothesis of this study is that students consider the expected utility of their choices, a function of expected net lifetime earnings and the probability of admission. Based on a survey…
Recruitment of some respiratory muscles during three maximal inspiratory manoeuvres.
Nava, S; Ambrosino, N; Crotti, P; Fracchia, C; Rampulla, C
1993-01-01
BACKGROUND--A study was undertaken to determine the level of recruitment of the muscles used in the generation of respiratory muscle force, and to ascertain whether maximal diaphragmatic force and maximal inspiratory muscle force need to be measured by separate tests. The level of activity of three inspiratory muscles and one expiratory muscle was studied during three maximal respiratory manoeuvres: (1) maximal inspiration against a closed airway (Muller manoeuvre or maximal inspiratory pressure (MIP)); (2) maximal inspiratory manoeuvre followed by a maximal expiratory effort (combined manoeuvre); and (3) maximal inspiratory sniff through the nose (sniff manoeuvre). METHODS--All the manoeuvres were performed from functional residual capacity. The gastric (PGA) and oesophageal (POES) pressures and their difference, transdiaphragmatic pressure (PDI), were recorded, together with the integrated EMG activity of the diaphragm (EDI), the sternomastoid (ESTR), the intercostal parasternals (EIC), and the rectus abdominis (ERA) muscles. RESULTS--Mean (SD) PDI values for the Muller, combined, and sniff manoeuvres were 127.6 (19.4), 162.7 (22.2), and 136.6 (24.8) cm H2O, respectively. The pattern of rib cage muscle recruitment (POES/PDI) was similar for the Muller and sniff manoeuvres (88% and 80% respectively), and was 58% in the combined manoeuvre, confirming data previously reported in the literature. Peak EDI amplitude was greater during the sniff manoeuvre in all subjects (100%) than during the combined (88.1%) and Muller (61.1%) manoeuvres. ESTR and EIC were more active in the Muller and the sniff manoeuvres. The contribution of the expiratory muscle (ERA) to the three manoeuvres was 100% in the combined, 26.1% in the sniff, and 11.5% in the Muller manoeuvre. CONCLUSIONS--Each of these three manoeuvres results in different mechanisms of inspiratory and expiratory muscle activation, and the intrathoracic and intra-abdominal pressures generated are a reflection of the interaction
Maximal expiratory flow volume curve in quarry workers.
Subhashini, Arcot Sadagopa; Satchidhanandam, Natesa
2002-01-01
Maximal Expiratory Flow Volume (MEFV) curves were recorded with a computerized spirometer (Med Spiror). Forced Vital Capacity (FVC), Forced Expiratory Volumes (FEV), and mean and maximal flow rates were obtained in 25 quarry workers who were free from respiratory disorders and in 20 healthy control subjects. All functional values were lower in the quarry workers than in the control subjects, with the largest reduction in quarry workers with a work duration of over 15 years, especially for FEF75. The effects are probably due to smoking rather than dust exposure. PMID:12024961
Stability region maximization by decomposition-aggregation method. [Skylab stability
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve estimates of stability regions by formulating and solving a suitable maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability analysis of large-scale linear as well as nonlinear systems.
Projection of two biphoton qutrits onto a maximally entangled state.
Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S
2011-04-01
Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons (a biphoton). The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. PMID:21517363
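The generalized maximally entangled state of two qutrits, |psi> = (|00> + |11> + |22>)/sqrt(3), can be constructed and verified numerically: tracing out either qutrit should leave the maximally mixed state I/3. A sketch of that check (illustrative only; it models the state, not the linear-optics measurement):

```python
import numpy as np

d = 3  # qutrit dimension

# |psi> = (|00> + |11> + |22>) / sqrt(3) in the d*d-dimensional joint space
psi = np.zeros(d * d)
for k in range(d):
    psi[k * d + k] = 1.0
psi /= np.sqrt(d)

# Reduced density matrix of one qutrit: partial trace over the other
rho = np.outer(psi, psi).reshape(d, d, d, d)   # indices (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)        # sum over b = b'
print(np.allclose(rho_A, np.eye(d) / d))       # True: maximally mixed
```

The maximally mixed reduced state is the defining signature of maximal entanglement for this bipartite system.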
Koichi, Shungo; Arisaka, Masaki; Koshino, Hiroyuki; Aoki, Atsushi; Iwata, Satoru; Uno, Takeaki; Satoh, Hiroko
2014-04-28
Computer-assisted chemical structure elucidation has been intensively studied since the first use of computers in chemistry in the 1960s. Most existing elucidators use a structure-spectrum database to obtain clues about the correct structure. Such a structure-spectrum database is expected to grow on a daily basis; hence, the need for an efficient structure elucidation system that can adapt to the growth of a database has also been growing. We have therefore developed a new elucidator using practically efficient graph algorithms, including the convex bipartite matching, weighted bipartite matching, and Bron-Kerbosch maximal clique algorithms. The use of the two matching algorithms in particular is a novel point of our elucidator. Because of these sophisticated algorithms, the elucidator produces exactly the correct structure if all of the fragments are included in the database. Even if not all of the fragments are in the database, the elucidator proposes relevant substructures that can help chemists identify the actual chemical structures. The elucidator, called the CAST/CNMR Structure Elucidator, plays a complementary role to the CAST/CNMR Chemical Shift Predictor, and together these two functions can be used to analyze the structures of organic compounds. PMID:24655374
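Of the graph algorithms listed, Bron-Kerbosch maximal-clique enumeration is the easiest to illustrate. The following is the textbook recursion without pivoting, a generic sketch rather than the CAST/CNMR implementation:

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate all maximal cliques of an undirected graph.

    R: clique built so far; P: candidate vertices that extend R;
    X: vertices already covered by an earlier branch;
    adj: {vertex: set(neighbors)}; cliques: output list.
    """
    if not P and not X:
        cliques.append(set(R))   # R cannot be extended: maximal clique
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Triangle 1-2-3 plus a pendant edge 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(map(sorted, cliques)))   # [[1, 2, 3], [3, 4]]
```

In a structure elucidator, vertices would typically be candidate fragment-to-atom assignments, with edges marking pairwise compatibility, so each maximal clique is a maximal consistent assignment.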
Bison distribution under conflicting foraging strategies: site fidelity vs. energy maximization.
Merkle, Jerod A; Cherry, Seth G; Fortin, Daniel
2015-07-01
Foraging strategies based on site fidelity and maximization of energy intake rate are two adaptive forces shaping animal behavior. Whereas these strategies can both be evolutionarily stable, they predict conflicting optimal behaviors when population abundance is in decline. In such a case, foragers employing an energy-maximizing strategy should reduce their use of low-quality patches as interference competition becomes less intense for high-quality patches. Foragers using a site fidelity strategy, however, should continue to use familiar patches. Because natural fluctuations in population abundance provide the only non-manipulative opportunity to evaluate adaptation to these evolutionary forces, few studies have examined these foraging strategies simultaneously. Using abundance and space use data from a free-ranging bison (Bison bison) population living in a meadow-forest matrix in Prince Albert National Park, Canada, we determined how individuals balance the trade-off between site fidelity and energy-maximizing patch choice strategies with respect to changes in population abundance. From 1996 to 2005, bison abundance increased from 225 to 475 and then decreased to 225 by 2013. During the period of population increase, population range size increased. This expansion involved the addition of relatively less profitable areas and patches, leading to a decrease in the mean expected profitability of the range. Yet, during the period of population decline, we detected neither a subsequent retraction in population range size nor an increase in mean expected profitability of the range. Further, patch selection models during the population decline indicated that, as density decreased, bison displayed stronger fidelity to previously visited meadows, but no increase in selection strength for profitable meadows. Our analysis reveals that an energy-maximizing patch choice strategy alone cannot explain the distribution of individuals and populations, and site fidelity is an
47 CFR 90.743 - Renewal expectancy.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Renewal expectancy. 90.743 Section 90.743 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Regulations Governing Licensing and Use of Frequencies in the 220-222 MHz Band § 90.743 Renewal expectancy. (a)...
47 CFR 90.743 - Renewal expectancy.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Renewal expectancy. 90.743 Section 90.743 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Regulations Governing Licensing and Use of Frequencies in the 220-222 MHz Band § 90.743 Renewal expectancy. (a)...
What Respondents Really Expect from Researchers
ERIC Educational Resources Information Center
Kolar, Tomaz; Kolar, Iztok
2008-01-01
This article addresses the issue of falling response rates in telephone surveys. To better understand and maintain respondent goodwill, concepts of psychological contract and respondent expectations are introduced and explored. Results of the qualitative study show that respondent expectations are not only socially contingent but also…
Do Students Expect Compensation for Wage Risk?
ERIC Educational Resources Information Center
Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.
2011-01-01
We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by heterogeneity which handicapped the use of actual market wage dispersion as risk measure in earlier studies. Students in…
Cross-Cultural Differences in Student Expectations.
ERIC Educational Resources Information Center
Shank, Matthew D.; And Others
1996-01-01
A survey of 686 United States and 338 Australian business students compared student expectations of service provision on campus. Results indicated that Australian students had higher expectations on three dimensions of service quality: professors' willingness to help students develop academic skills; professor sympathy and reassurance; and…
Grief Experiences and Expectance of Suicide
ERIC Educational Resources Information Center
Wojtkowiak, Joanna; Wild, Verena; Egger, Jos
2012-01-01
Suicide is generally viewed as an unexpected cause of death. However, some suicides might be expected to a certain extent, which needs to be further studied. The relationships between expecting suicide, feeling understanding for the suicide, and later grief experiences were explored. In total, 142 bereaved participants completed the Grief…
Rising Tides: Faculty Expectations of Library Websites
ERIC Educational Resources Information Center
Nicol, Erica Carlson; O'English, Mark
2012-01-01
Looking at 2003-2009 LibQUAL+ responses at research-oriented universities in the United States, faculty library users report a significant and consistent rise in desires and expectations for library-provided online tools and websites, even as student user groups show declining or leveling expectations. While faculty, like students, also report…
Raising Expectations is Aim of New Effort
ERIC Educational Resources Information Center
Sparks, Sarah D.
2010-01-01
Researchers and policymakers agree that teachers' expectations of what their students can do can become self-fulfilling prophecies for children's academic performance. Yet while the "soft bigotry of low expectations" has become an education catchphrase, scholars and advocates are just beginning to explore whether it is possible to prevent such…
Sex Differences in Educational Aspirations and Expectations
ERIC Educational Resources Information Center
Marini, Margaret Mooney; Greenberger, Ellen
1978-01-01
Goals for educational attainment were studied in eleventh grade students. The males aspired to and expected higher levels of attainment. At higher aspiration levels, the discrepancy between aspiration and expectation was greater for females. Both socioeconomic background and academic ability had a greater effect on educational ambition for males.…
Emergency Health Preparedness: Expectations for Teachers.
ERIC Educational Resources Information Center
Winkelman, Jack L.
Specific issues relevant to the emergency health preparedness of schools and the key roles and expectations applicable to teachers are outlined. It is noted that, while issues of legal liability relevant to teachers are complex, teachers are expected to: (1) anticipate possible risk or harm involved in activities; (2) give adequate warning of…
Teacher Expectations and the Able Child.
ERIC Educational Resources Information Center
Lee-Corbin, Hilary
1994-01-01
Two middle school teachers and two students in each of the teacher's classes were assessed for field dependence-independence (FDI). The teachers were interviewed about their students. Found that one teacher had higher expectations and one had lower expectations for the student who had the same FDI orientation as the teacher than for the student…
Irrational Expectations in the Job Search Process.
ERIC Educational Resources Information Center
Liptak, John J.
1989-01-01
Discusses expectations held by client beginning a job search. Describes Ellis's Rational-Emotive Therapy, designed to teach clients to think rationally prior to the job search. Assesses various irrational beliefs surrounding the job search. Concludes that clients can be taught to combat irrational expectations. (Author/BHK)
Smokers’ Treatment Expectancies Predict Smoking Cessation Success
Fucito, Lisa M.; Toll, Benjamin A.; Roos, Corey R.; King, Andrea C.
2014-01-01
Introduction Smokers’ treatment expectancies may influence their choice of a particular medication as well as their medication experience. Aims This study examined the role of smokers’ treatment expectancies to their smoking cessation outcomes in a completed, randomized, placebo-controlled trial of naltrexone for smoking cessation, controlling for perceptions of treatment assignment. Methods Treatment seeking cigarette smokers (N = 315) were randomized to receive either naltrexone (50 mg) or placebo in combination with nicotine patch and behavioral counseling. Expectancies for naltrexone as a smoking cessation aid were assessed at baseline and 4 weeks after the quit date. Results More positive baseline medication expectancies predicted higher quit rates at one month in the naltrexone (OR =1.45, p =.04) group but were associated with lower quit rates in the placebo group (OR =.66, p =.03). Maintaining and/or increasing positive medication expectancies in the first month of treatment was associated with better pill adherence during this interval in the naltrexone group (ps <.05). Positive baseline medication expectancies were also associated with the perception of having received naltrexone over placebo among all participants. Conclusions Positive medication expectancies in smokers may contribute to better treatment response. Assessing treatment expectancies and attempting to maintain or improve them may be important for the delivery, evaluation, and targeting of smoking cessation treatments.
Intentions and Expectations in Differential Residential Selection
ERIC Educational Resources Information Center
Michelson, William; And Others
1973-01-01
This paper summarizes intentions and expectations in differential residential selection among families who had chosen to move. Wives appear at face value to assess alternatives in the selection process rationally, to be aware of limitations in housing and location they will experience, and to have expectations about behavioral changes consistent…
Framing expectations in early HIV cure research.
Dubé, Karine; Henderson, Gail E; Margolis, David M
2014-10-01
Language used to describe clinical research represents a powerful opportunity to educate volunteers. In the case of HIV cure research there is an emerging need to manage expectations by using the term 'experiment'. Cure experiments are proof-of-concept studies designed to evaluate novel paradigms to reduce persistent HIV-1 reservoirs, without any expectation of medical benefit. PMID:25280965
First Grade Teacher Expectations in Mathematics.
ERIC Educational Resources Information Center
Funkhouser, Charles P.
The focus of this study was on the expectations that first-grade teachers have of the mathematics skills of their incoming first-grade students. At the end of one school year and at the beginning of the next school year, first-grade teachers (n=64) in rural and urban settings completed the Mathematics Skills Expectations Survey (MSES). The MSES…
5 CFR 470.301 - Program expectations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Program expectations. 470.301 Section 470.301 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.301 Program expectations....
Parents' Role in Adolescents' Educational Expectations
ERIC Educational Resources Information Center
Rimkute, Laura; Hirvonen, Riikka; Tolvanen, Asko; Aunola, Kaisa; Nurmi, Jari-Erik
2012-01-01
The present study examined the extent to which mothers' and fathers' expectations for their offspring's future education, their level of education, and adolescents' academic achievement predict adolescents' educational expectations. To investigate this, 230 adolescents were examined twice while they were in comprehensive school (in the 7th and 9th…
Parenthood, Life Course Expectations, and Mental Health.
Carlson, Daniel; Williams, Kristi
2011-03-01
Although past research indicates that early and premarital childbearing negatively affect mental health, little is known about the role of individual expectations in shaping these associations. Using data from the National Longitudinal Survey of Youth 1979, we consider how individual expectations, measured prior to the entry into parenthood, shape mental health outcomes associated with premarital childbearing and birth timing, and consider gender and race/ethnic variations. Results indicate that expecting children before marriage ameliorates the negative mental health consequences of premarital first births and that subsequently deviating from expected birth timing, either early or late, results in increased distress at all birth ages. In both cases, however, the degree and manner in which expectations matter differ by gender and race/ethnicity. Results indicate that expectations for premarital childbearing matter only for African-Americans' mental health and although later than expected births are associated with decreased mental health for all groups, earlier than expected births are only associated with decreased mental health for women, Hispanics, and non-Hispanic whites. PMID:22229115
The Expectant Reader in Theory and Practice.
ERIC Educational Resources Information Center
Fowler, Lois Josephs; McCormick, Kathleen
1986-01-01
Offers a method of using reader response theory that emphasizes the expectations about a text and how those expectations are fulfilled or deflated. Specifically, students read traditional fables, fairy tales, and parables, and compare them to contemporary works such as Kafka's "Metamorphosis" and Marquez's "The Very Old Man With Enormous Wings."…
International Variations in Measuring Customer Expectations.
ERIC Educational Resources Information Center
Calvert, Philip J.
2001-01-01
Discussion of customer expectations of library service quality and SERVQUAL as a measurement tool focuses on two studies: one that compared a survey of Chinese university students' expectations of service quality to New Zealand students; and one that investigated national culture as a source of attitudes to customer service. (Author/LRW)
Patterns of Drug Use and Expectations in Methadone Patients
Joe, George W.; Flynn, Patrick M.; Broome, Kirk M.; Simpson, D. Dwayne
2007-01-01
Expectations about future behavior have been shown to have a positive relationship with subsequent behavior. For patients in drug treatment, recovery should manifest changes in drug use and in cognitive perceptions of being able to refrain from use. The present study identified latent patterns of the longitudinal relationship between drug use expectation and illegal drug use during treatment. Latent variable mixture modeling identified three patterns of change over successive 3-month intervals during treatment: Improvers (48%), Decliners (33%), and Continuing Users (19%). The sample consisted of 497 patients in community-based outpatient methadone treatment. The utility of the latent patterns was shown through their relationship to treatment engagement, where Continuing Users had lower counseling rapport and time in treatment. These latent patterns also differed on drug use measures at follow-up. Additional analyses of expectations with measures of opioid use, cocaine use, or criminality yielded similar latent patterns. Expectations about future drug use were found to be a useful measure of cognitive change corresponding to drug use change. Its potential as a brief treatment management tool is noted. PMID:17218066
Assessment and mitigation of DNA loss utilizing centrifugal filtration devices.
Doran, Ashley E; Foran, David R
2014-11-01
Maximizing DNA recovery during its isolation can be vital in forensic casework, particularly when DNA yields are expected to be low, such as from touch samples. Many forensic laboratories utilize centrifugal filtration devices to purify and concentrate the DNA; however, DNA loss has been reported when using them. In this study, all centrifugal filtration devices tested caused substantial DNA loss, affecting low molecular weight DNA (PCR product) somewhat more than high molecular weight DNA. Strategies for mitigating DNA loss were then examined, including pre-treatment with glucose, glycogen, silicone (RainX(®)), bovine serum albumin, yeast RNA, or high molecular weight DNA. The length of pre-treatment and UV irradiation of pre-treatment reagents were also investigated. Pre-treatments with glucose and glycogen resulted in little or no improvement in DNA recovery, and most or all DNA was lost after silicone pre-treatment. Devices pre-treated with BSA produced irregular and uninterpretable quantitative PCR amplification curves for the DNA and internal PCR control. On the other hand, nucleic acid pre-treatments greatly improved recovery of all DNAs. Pre-treatment time and its UV irradiation did not influence DNA recovery. Overall, the results show that centrifugal filtration devices trap DNA, yet their proper pre-treatment can circumvent that loss, which is critical in the case of low copy forensic DNA samples. PMID:25173492
Maximizing the Online Learning Experience: Suggestions for Educators and Students
ERIC Educational Resources Information Center
Cicco, Gina
2011-01-01
This article will discuss ways of maximizing the online course experience for teachers- and counselors-in-training. The widespread popularity of online instruction makes it a necessary learning experience for future teachers and counselors (Ash, 2011). New teachers and counselors take on the responsibility of preparing their students for real-life…
How Managerial Ownership Affects Profit Maximization in Newspaper Firms.
ERIC Educational Resources Information Center
Busterna, John C.
1989-01-01
Explores whether different levels of a manager's ownership of a newspaper affects the manager's profit maximizing attitudes and behavior. Finds that owner-managers tend to place less emphasis on profits than non-owner-controlled newspapers, contrary to economic theory and empirical evidence from other industries. (RS)
Modifying Softball for Maximizing Learning Outcomes in Physical Education
ERIC Educational Resources Information Center
Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue
2014-01-01
Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…
Bernoulli equation and the nonexistence of maximal jets
NASA Astrophysics Data System (ADS)
Zdziarski, Andrzej A.
2016-02-01
We discuss the idea of maximal jets introduced by Falcke & Biermann (1995, A&A, 293, 665). According to it, the maximum possible jet power in its internal energy equals the kinetic power in its rest mass. We show this result is incorrect because of an unfortunate algebraic mistake.
Mentoring as Professional Development for Novice Entrepreneurs: Maximizing the Learning
ERIC Educational Resources Information Center
St-Jean, Etienne
2012-01-01
Mentoring can be seen as relevant if not essential in the continuing professional development of entrepreneurs. In the present study, we seek to understand how to maximize the learning that occurs through the mentoring process. To achieve this, we consider various elements that the literature suggested are associated with successful mentoring and…
Fertilizer placement to maximize nitrogen use by fescue
Technology Transfer Automated Retrieval System (TEKTRAN)
The method of fertilizer nitrogen (N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained claypan soils of southeastern Kansas. This study was conducted to determine if knifed N r...
Density-metric unimodular gravity: Vacuum maximal symmetry
Abbassi, A.H.; Abbassi, A.M.
2011-05-15
We have investigated the vacuum maximally symmetric solutions of the recently proposed density-metric unimodular gravity theory. The results differ widely from the inflationary scenario: the exponential dependence on time in de Sitter space is replaced by a power law. Open space-times with non-zero cosmological constant are excluded.
An effective theory of metrics with maximal proper acceleration
NASA Astrophysics Data System (ADS)
Gallego Torromé, Ricardo
2015-12-01
A geometric theory for spacetimes whose world lines associated with physical particles have an upper bound for the proper acceleration is developed. After some fundamental remarks on the requirements that the classical dynamics for point particles should hold, the notion of a generalized metric and a theory of maximal proper acceleration are introduced. A perturbative approach to metrics of maximal proper acceleration is discussed and we show how it provides a consistent theory where the associated Lorentzian metric corresponds to the limit when the maximal proper acceleration goes to infinity. Then several of the physical and kinematical properties of the maximal acceleration metric are investigated, including a discussion of the rudiments of the causal theory and the introduction of the notions of radar distance and celerity function. We discuss the corresponding modification of the Einstein mass-energy relation when the associated Lorentzian geometry is flat. In such a context it is also proved that the physical dispersion relation is relativistic. Two possible physical scenarios where the modified mass-energy relation could be confronted against the experiment are briefly discussed.
Curriculum and Testing Strategies to Maximize Special Education STAAR Achievement
ERIC Educational Resources Information Center
Johnson, William L.; Johnson, Annabel M.; Johnson, Jared W.
2015-01-01
This document is from a presentation at the 2015 annual conference of the Science Teachers Association of Texas (STAT). The two sessions (each listed as feature sessions at the state conference) examined classroom strategies the presenter used in his chemistry classes to maximize Texas end-of-course chemistry test scores for his special population…
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum lead to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. PMID:26026290
Maximizing Thermal Efficiency and Optimizing Energy Management (Fact Sheet)
Not Available
2012-03-01
Researchers at the Thermal Test Facility (TTF) on the campus of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado, are addressing maximizing thermal efficiency and optimizing energy management through analysis of efficient heating, ventilating, and air conditioning (HVAC) strategies, automated home energy management (AHEM), and energy storage systems.
Dynamical generation of maximally entangled states in two identical cavities
Alexanian, Moorad
2011-11-15
The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.
Maximizing Cohesion and Minimizing Conflict in Collaborative Writing Groups.
ERIC Educational Resources Information Center
Nelson, Sandra J.; Smith, Douglas C.
1990-01-01
Presents instructional strategies designed to maximize cohesion and minimize conflict in collaborative writing groups. Argues that an understanding of sources of conflict, conflict management strategies, and group processes allows productive and creative group energy to be channeled into effective business writing. (RS)
The Profit-Maximizing Firm: Old Wine in New Bottles.
ERIC Educational Resources Information Center
Felder, Joseph
1990-01-01
Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)
Estimation of the Maximal Lactate Steady State in Endurance Runners.
Llodio, I; Gorostiaga, E M; Garcia-Tabar, I; Granados, C; Sánchez-Medina, L
2016-06-01
This study aimed to predict the velocity corresponding to the maximal lactate steady state (MLSSV) from non-invasive variables obtained during a maximal multistage running field test (modified University of Montreal Track Test, UMTT), and to determine whether a single constant velocity test (CVT), performed several days after the UMTT, could estimate the MLSSV. Within 4-5 weeks, 20 male runners performed: 1) a modified UMTT, and 2) several 30 min CVTs to determine MLSSV to a precision of 0.25 km·h⁻¹. Maximal aerobic velocity (MAV) was the best predictor of MLSSV. A regression equation was obtained: MLSSV = 1.425 + (0.756·MAV); R² = 0.63. Running velocity during the CVT (VCVT) and blood lactate at 6 (La6) and 30 (La30) min further improved the MLSSV prediction: MLSSV = VCVT + 0.503 - 0.266·(La30 - La6); R² = 0.66. MLSSV can be estimated from MAV during a single maximal multistage running field test among a homogeneous group of trained runners. This estimation can be further improved by performing an additional CVT. In terms of accuracy, simplicity and cost-effectiveness, the reported regression equations can be used for the assessment and training prescription of endurance runners. PMID:27116348
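The two regression equations reported in this abstract can be applied directly. The sketch below is a minimal implementation using the published coefficients; the function names and the example MAV value are illustrative, not from the paper.

```python
# Estimating MLSS velocity (MLSSV, km/h) from the regression equations
# reported in the abstract. Coefficients are as published; names are
# illustrative.

def mlssv_from_mav(mav_kmh: float) -> float:
    """Predict MLSSV from maximal aerobic velocity (UMTT); reported R^2 = 0.63."""
    return 1.425 + 0.756 * mav_kmh

def mlssv_from_cvt(v_cvt_kmh: float, la6: float, la30: float) -> float:
    """Refined prediction using a 30 min constant-velocity test (CVT);
    la6/la30 are blood lactate (mmol/L) at 6 and 30 min; reported R^2 = 0.66."""
    return v_cvt_kmh + 0.503 - 0.266 * (la30 - la6)

# Hypothetical runner with MAV = 18 km/h:
print(round(mlssv_from_mav(18.0), 2))  # 1.425 + 0.756*18 = 15.03
```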
Nursing Students' Awareness and Intentional Maximization of Their Learning Styles
ERIC Educational Resources Information Center
Mayfield, Linda Riggs
2012-01-01
This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…
Optoelectronic plethysmography compared to spirometry during maximal exercise.
Layton, Aimee M; Moran, Sienna L; Garber, Carol Ewing; Armstrong, Hilary F; Basner, Robert C; Thomashow, Byron M; Bartels, Matthew N
2013-01-15
The purpose of this study was to compare simultaneous measurements of tidal volume (Vt) by optoelectronic plethysmography (OEP) and spirometry during a maximal cycling exercise test to quantify possible differences between methods. Vt measured simultaneously by OEP and spirometry was collected during a maximal exercise test in thirty healthy participants. The two methods were compared by linear regression and Bland-Altman analysis at submaximal and maximal exercise. The average difference between the two methods and the mean percentage discrepancy were calculated. Submaximal exercise (SM) and maximal exercise (M) Vt measured by OEP and spirometry had very good correlation, SM R=0.963 (p<0.001), M R=0.982 (p<0.001), and a high degree of common variance, SM R²=0.928, M R²=0.983. Bland-Altman analysis demonstrated that during SM, OEP could measure exercise Vt as much as 0.134 L above and -0.025 L below that of spirometry. During M, OEP could measure exercise Vt as much as 0.188 L above and -0.017 L below that of spirometry. The discrepancy between measurements was -2.0 ± 7.2% at SM and -2.4 ± 3.9% at M. In conclusion, Vt measurements during exercise by OEP and spirometry are closely correlated and the difference between measurements was insignificant. PMID:23022440
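The Bland-Altman analysis used in this study computes the mean difference (bias) between two methods and its 95% limits of agreement. A minimal sketch, with synthetic data in place of the study's measurements:

```python
# Minimal Bland-Altman agreement analysis for two measurement methods,
# as used to compare OEP and spirometry tidal volumes. Data are synthetic.
import statistics

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements a, b.
    Limits of agreement are bias +/- 1.96 * SD of the paired differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

oep   = [1.02, 1.48, 2.01, 2.55, 3.10]  # hypothetical Vt by OEP (L)
spiro = [1.00, 1.50, 2.00, 2.60, 3.00]  # hypothetical Vt by spirometry (L)
bias, lower, upper = bland_altman(oep, spiro)
print(round(bias, 3))  # mean difference between methods
```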
How to Maximize Learning for Gifted Math Students
ERIC Educational Resources Information Center
Chamberlin, Scott A.
2008-01-01
Having a gifted math or science student in the family or classroom is a fascination as well as a significant challenge and responsibility for many parents and teachers. In order to help maximize student learning, several questions need to be asked. What should be the role of technology? How well do traditional schools serve gifted students? What…
PROFIT-MAXIMIZING PRINCIPLES, INSTRUCTIONAL UNITS FOR VOCATIONAL AGRICULTURE.
ERIC Educational Resources Information Center
BARKER, RICHARD L.
The purpose of this guide is to assist vocational agriculture teachers in stimulating junior and senior high school student thinking, understanding, and decision making as associated with profit-maximizing principles of farm operation for use in farm management. It was developed under a U.S. Office of Education grant by teacher-educators, a farm…
Maximizing plant density affects broccoli yield and quality
Technology Transfer Automated Retrieval System (TEKTRAN)
Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...
Emotional Control and Instructional Effectiveness: Maximizing a Timeout
ERIC Educational Resources Information Center
Andrews, Staci R.
2015-01-01
This article provides recommendations for best practices for basketball coaches to maximize the instructional effectiveness of a timeout during competition. Practical applications are derived from research findings linking emotional intelligence to effective coaching behaviors. Additionally, recommendations are based on the implications of the…
Maximality and Idealized Cognitive Models: The Complementation of Spanish "Tener."
ERIC Educational Resources Information Center
Hilferty, Joseph; Valenzuela, Javier
2001-01-01
Discusses the bare-noun phrase (NP) complementation pattern of the Spanish verb "tener" (have). Shows that the maximality of the complement NP is dependent upon three factors: (1) idiosyncratic valence requirements; (2) encyclopedic knowledge related to possession; and (3) contextualized semantic construal. (Author/VWL)
Maximizing grain sorghum water use efficiency under deficit irrigation
Technology Transfer Automated Retrieval System (TEKTRAN)
Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...
Maximally entangled mixed-state generation via local operations
Aiello, A.; Puentes, G.; Voigt, D.; Woerdman, J. P.
2007-06-15
We present a general theoretical method to generate maximally entangled mixed states of a pair of photons initially prepared in the singlet polarization state. This method requires only local operations upon a single photon of the pair and exploits spatial degrees of freedom to induce decoherence. We report also experimental confirmation of these theoretical results.
Palfai, T; Wood, M D
2001-03-01
College student drinkers (N = 314) participated in a health survey in which they (a) completed an alcohol-related memory association task (expectancy accessibility measure), (b) rated their positive expectancies about alcohol use (expectancy strength measure), and (c) reported their level of alcohol involvement. Hierarchical regression analyses showed that both expectancy accessibility and expectancy strength predicted frequency of alcohol use and alcohol-related problems. Moreover, moderational analyses showed that the association between expectancy strength and frequency of alcohol use was greater for those who generated more alcohol responses on the expectancy association task. These findings suggest that the outcome association measure and Likert scale ratings of expectancies may assess distinct properties of expectancy representations, which may have independent and interactive effects on different aspects of drinking behavior. PMID:11255940
Expectation-Driven Text Extraction from Medical Ultrasound Images.
Reul, Christian; Köberle, Philipp; Üçeyler, Nurcan; Puppe, Frank
2016-01-01
In this study an expectation-driven approach is proposed to extract data stored as pixel structures in medical ultrasound images. Prior knowledge about certain properties like the position of the text and its background and foreground grayscale values is utilized. Several open source Java libraries are used to pre-process the image and extract the textual information. The results are presented in an Excel table together with the outcome of several consistency checks. After manually correcting potential errors, the outcome is automatically stored in the main database. The proposed system yielded excellent results, reaching an accuracy of 99.94% and reducing the necessary human effort to a minimum. PMID:27577478
Expectations predict chronic pain treatment outcomes.
Cormier, Stéphanie; Lavigne, Geneviève L; Choinière, Manon; Rainville, Pierre
2016-02-01
Accumulating evidence suggests an association between patient pretreatment expectations and numerous health outcomes. However, it remains unclear if and how expectations relate to outcomes after treatments in multidisciplinary pain programs. The present study aims at investigating the predictive association between expectations and clinical outcomes in a large database of chronic pain patients. In this observational cohort study, participants were 2272 patients treated in one of 3 university-affiliated multidisciplinary pain treatment centers. All patients received personalized care, including medical, psychological, and/or physical interventions. Patient expectations regarding pain relief and improvements in quality of life and functioning were measured before the first visit to the pain centers and served as predictor variables. Changes in pain intensity, depressive symptoms, pain interference, and tendency to catastrophize, as well as satisfaction with pain treatment and global impressions of change at 6-month follow-up, were considered as treatment outcomes. Structural equation modeling analyses showed significant positive relationships between expectations and most clinical outcomes, and this association was largely mediated by patients' global impressions of change. Similar patterns of relationships between variables were also observed in various subgroups of patients based on sex, age, pain duration, and pain classification. Such results emphasize the relevance of patient expectations as a determinant of outcomes in multimodal pain treatment programs. Furthermore, the results suggest that superior clinical outcomes are observed in individuals who expect high positive outcomes as a result of treatment. PMID:26447703
The standardized mortality ratio and life expectancy.
Tsai, S P; Hardy, R J; Wen, C P
1992-04-01
This paper develops a theoretical relation between the standardized mortality ratio (SMR) and the expected years of life and establishes a regression equation for easy conversion between these two statistics. The mathematical expression of the derived relation is an approximation, requiring an assumption of constant age-specific mortality ratios. It underestimates the "true" value calculated based on life table technique when the age-specific mortality ratios increase with age. This equation provides a conservative method to estimate the expected years of life for cohort mortality studies and facilitates an assessment of the impact of work-related factors on the length of life of the worker. It also allows one to convert the SMR to life expectancy in smaller studies whose sole objective is to determine the SMR in a working population. A 1% decrease (or increase) in the standardized mortality ratio will result in 0.1373 years increased (or decreased) life expectancy based on white male data for the US population. Furthermore, with data from 14 large oil refinery and chemical worker cohorts of white males, the "derived" expected years of life based on the regression equation closely predicts the corresponding value calculated using a standard life table technique. This statistical equation is expected to have practical applications when used in conjunction with the SMR to provide an approximate measure of life expectancy, a term and statistic familiar to most lay people. PMID:1595682
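The reported slope makes for a quick back-of-the-envelope conversion. A minimal sketch (the function name is hypothetical; the 0.1373-year slope is the abstract's figure for US white males, and the relation is the paper's linear approximation, not an exact life-table calculation):

```python
def life_expectancy_change(smr_change_pct, slope=0.1373):
    """Approximate change in expected years of life for a given change in
    the SMR, expressed in percentage points (US white-male regression slope).
    A 1-point decrease in SMR corresponds to ~0.1373 more years of life."""
    return -slope * smr_change_pct

# Example: a cohort with SMR = 90, i.e. 10 points below the reference of 100
years_gained = life_expectancy_change(-10.0)  # about +1.37 years
```

As the abstract notes, this approximation assumes constant age-specific mortality ratios and is conservative when those ratios rise with age.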
Patient expectation: what is comprehensive health care?
Starr, G C; Norris, R; Patil, K D; Young, P R
1979-01-01
A patient expectation survey was developed and implemented in order to define the spectrum of health care activities expected from the University of Nebraska Family Health Centers. The hypothesis underlying the survey is that patient expectations or opinions vary considerably among the members of any given population. High expectation is present for office visits, emergency services, yearly physical examination, and performance of chest x-ray, blood test, proctoscopy, and eye examination. Psychiatric services, marital counseling, youth counseling, nursing home care, and health education are indicated as not necessary by a plurality of the respondents. Examination of the responses by age, sex, and payment status through canonical correlation reveals a number of strong correlations of specific subgroups and expectations. Factor analysis revealed three independent factors or clusters representing health care issues as perceived by the patient. This study, and further similar studies, will help the family physician understand what patients expect. Through a better understanding of patient expectation, patient satisfaction and compliance may be improved. PMID:759540
Primary Care Clinician Expectations Regarding Aging
Davis, Melinda M.; Bond, Lynne A.; Howard, Alan; Sarkisian, Catherine A.
2011-01-01
Purpose: Expectations regarding aging (ERA) in community-dwelling older adults are associated with personal health behaviors and health resource usage. Clinicians’ age expectations likely influence patients’ expectations and care delivery patterns; yet, limited research has explored clinicians’ age expectations. The Expectations Regarding Aging Survey (ERA-12) was used to assess (a) age expectations in a sample of primary care clinicians practicing in the United States and (b) clinician characteristics associated with ERA-12 scores. Design and Methods: This study was a cross-sectional survey of primary care clinicians affiliated with 5 practice-based research networks, October 2008 to June 2009. A total of 374 of the 1,510 distributed surveys were returned (24.8% response rate); 357 were analyzed. Mean respondent age was 48.6 years (SD = 11.6; range 23–87 years); 88.0% were physicians, 96.0% family medicine, 94.9% White, and 61.9% male. Results: Female clinicians reported higher ERA-12 scores; clinicians’ age expectations decreased with greater years in practice. Among the clinicians, higher ERA-12 scores were associated with higher clinician ratings of the importance of and personal skill in administering preventive counseling and the importance of delivering preventive services. Agreement with individual ERA-12 items varied widely. Implications: Unrealistically high or low ERA could negatively influence the quality of care provided to patients and patients’ own age expectations. Research should examine the etiology of clinicians’ age expectations and their association with older adult diagnoses and treatment. Medical education must incorporate strategies to promote clinician attitudes that facilitate successful patient aging. PMID:21430129
Massie, B.M.; Wisneski, J.; Kramer, B.; Hollenberg, M.; Gertz, E.; Stern, D.
1982-05-01
Recently the quantitation of regional ²⁰¹Tl clearance has been shown to increase the sensitivity of the scintigraphic detection of coronary disease. Although ²⁰¹Tl clearance rates might be expected to vary with the degree of exercise, this relationship has not been explored. We therefore evaluated the rate of decrease in myocardial ²⁰¹Tl activity following maximal and submaximal stress in seven normal subjects and 21 patients with chest pain, using the seven-pinhole tomographic reconstruction technique. In normals, the mean ²⁰¹Tl clearance rate declined from 41% ± 7 over a 3-hr period with maximal exercise to 25% ± 5 after 3 hr at a submaximal level (p < 0.001). Similar differences in clearance rates were found in the normally perfused regions of the left ventricle in patients with chest pain, depending on whether or not a maximal end point (defined as either the appearance of ischemia or reaching 85% of age-predicted heart rate) was achieved. In five patients who did not reach these end points, 3-hr clearance rates in uninvolved regions averaged 25% ± 2, in contrast to a mean of 38% ± 5 for such regions in 15 patients who exercised to ischemia or an adequate heart rate. These findings indicate that clearance criteria derived from normals can be applied to patients who are stressed maximally, even if the duration of exercise is limited, but that caution must be used in interpreting clearance rates in those who do not exercise to an accepted end point.
Classics in the Classroom: Great Expectations Fulfilled.
ERIC Educational Resources Information Center
Pearl, Shela
1986-01-01
Describes how an English teacher in a Queens, New York, ghetto school introduced her grade nine students to Charles Dickens's "Great Expectations." Focuses on students' responses, which eventually became enthusiastic, and discusses the use of classics within the curriculum. (KH)
What to Expect during a Heart Transplant
Just before heart transplant surgery, the patient will ... are not replaced as part of the surgery. Figure A shows where the diseased heart is ...
47 CFR 90.743 - Renewal expectancy.
Code of Federal Regulations, 2012 CFR
2012-10-01
... relate to any matter described in this paragraph. (c) Phase I non-nationwide licensees have license terms... authorization in order to receive a renewal expectancy. Phase I nationwide licensees and all Phase II...
Parental outcome expectations on children's TV viewing
Technology Transfer Automated Retrieval System (TEKTRAN)
Children's TV viewing has been associated with increased sedentary behavior and poor eating habits. Positive intervention effects have been observed when addressing outcome expectations as a mediator in interventions targeting children's dietary behavior. Little is known about parental outcome expec...
What To Expect Before a Lung Transplant
If you get into a medical center's ... friends also can offer support. When a Donor Lung Becomes Available: OPTN matches donor lungs to recipients ...
What to Expect During a Lung Transplant
Just before lung transplant surgery, you will ... airway and its blood vessels to your heart. The illustration shows the process of a ...
Meningitis B Vaccine Falls Short of Expectations
1 in 3 students didn't get immunity against outbreak strain after 2 doses of Bexsero, ... the effect of the MenB vaccine on individual immunity. Researchers at Princeton University, the University of Minnesota ...
What to Expect After Breast Reconstruction Surgery
It’s important to have an idea of what ... regular mammograms. Possible risks during and after reconstruction surgery: There are certain risks from any type of ...
Converting customer expectations into achievable results.
Landis, G A
1999-11-01
It is not enough in today's environment just to meet customers' expectations--we must exceed them. One must therefore learn what constitutes those expectations. These needs have expanded during the past few years beyond just manufacturing the product and looking at the outcome from a provincial standpoint. Now we must understand and satisfy the entire supply chain: managing this process to satisfy the customer involves the supplier, the manufacturer, and the entire distribution system. PMID:10623140
Gamma loop contributing to maximal voluntary contractions in man.
Hagbarth, K E; Kunesch, E J; Nordin, M; Schmidt, R; Wallin, E U
1986-11-01
A local anaesthetic drug was injected around the peroneal nerve in healthy subjects in order to investigate whether the resulting loss in foot dorsiflexion power in part depended on a gamma-fibre block preventing 'internal' activation of spindle end-organs and thereby depriving the alpha-motoneurones of an excitatory spindle inflow during contraction. The motor outcome of maximal dorsiflexion efforts was assessed by measuring firing rates of individual motor units in the anterior tibial (t.a.) muscle, mean voltage e.m.g. from the pretibial muscles, dorsiflexion force and range of voluntary foot dorsiflexion movements. The tests were performed with and without peripheral conditioning stimuli, such as agonist or antagonist muscle vibration or imposed stretch of the contracting muscles. As compared to control values of t.a. motor unit firing rates in maximal isometric voluntary contractions, the firing rates were lower and more irregular during maximal dorsiflexion efforts performed during subtotal peroneal nerve blocks. During the development of paresis a gradual reduction of motor unit firing rates was observed before the units ceased responding to the voluntary commands. This change in motor unit behaviour was accompanied by a reduction of the mean voltage e.m.g. activity in the pretibial muscles. At a given stage of anaesthesia the e.m.g. responses to maximal voluntary efforts were more affected than the responses evoked by electric nerve stimuli delivered proximal to the block, indicating that impaired impulse transmission in alpha motor fibres was not the sole cause of the paresis. The inability to generate high and regular motor unit firing rates during peroneal nerve blocks was accentuated by vibration applied over the antagonistic calf muscles. By contrast, in eight out of ten experiments agonist stretch or vibration caused an enhancement of motor unit firing during the maximal force tasks. The reverse effects of agonist and antagonist vibration on the
Expectation and Attention in Hierarchical Auditory Prediction
Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M.; Bekinschtein, Tristan A.
2013-01-01
Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding. PMID:23825422
Dynamic emotion perception and prior expectancy.
Dzafic, Ilvana; Martin, Andrew K; Hocking, Julia; Mowry, Bryan; Burianová, Hana
2016-06-01
Social interactions require the ability to rapidly perceive emotion from various incoming dynamic, multisensory cues. Prior expectations reduce incoming emotional information and direct attention to cues that are aligned with what is expected. Studies to date have investigated the prior expectancy effect using static emotional images, despite the fact that dynamic stimuli would represent greater ecological validity. The objective of the study was to create a novel functional magnetic resonance imaging (fMRI) paradigm to examine the influence of prior expectations on naturalistic emotion perception. For this purpose, we developed a dynamic emotion perception task, which consisted of audio-visual videos that carry emotional information congruent or incongruent with prior expectations. The results show that emotional congruency was associated with activity in prefrontal regions, amygdala, and putamen, whereas emotional incongruency was associated with activity in temporoparietal junction and mid-cingulate gyrus. Supported by the behavioural results, our findings suggest that prior expectations are reinforced after repeated experience and learning, whereas unexpected emotions may rely on fast change detection processes. The results from the current study are compatible with the notion that the ability to automatically detect unexpected changes in complex dynamic environments allows for adaptive behaviours in potentially advantageous or threatening situations. PMID:27126841
The Behavioral Expectations Scale: Assessment of Expectations for Interaction with the Mentally Ill
ERIC Educational Resources Information Center
Golding, Stephan L.; And Others
1975-01-01
The authors argue that the process by which expectations influence social interaction can be investigated; hence, the Behavioral Expectations Scale (BES) was developed. Preliminary data indicate the BES may be useful in further investigation of the role of expectation in influencing behavior toward those labeled "mentally ill." (Author/HMV)
Testing subject comprehension of utility questionnaires.
Dobrez, Deborah G; Calhoun, Elizabeth A
2004-03-01
Utility questionnaires are often considered difficult for subjects to understand. Our study reports pilot testing of two subject comprehension tests to determine whether comprehension can be directly measured. Current health utilities were assessed using the standard gamble (SG), time trade-off (TTO), and visual analog scale. Subjects were randomized to one of two tests: (1) Logical consistency was tested by comparing rankings of two health states with an investigator-assigned a priori ranking; (2) Utility responses for two hypothetical respondents were presented; the subject was asked who had the better health. Thirty-one subjects completed the SG and TTO for two health states: being blind and wearing glasses. No subjects had inconsistent rankings. Post hoc analyses found that subjects reporting utilities below the first decile for the state, wearing glasses, had significantly lower current health utility than remaining subjects. Of the thirty subjects who evaluated the hypothetical respondents' utilities, five incorrectly judged the respondent with worse utility to have better health. Those subjects also reported current health utilities significantly lower than the remaining subjects. Our study findings suggest that a minority should be expected to have difficulty completing utility questionnaires. Comprehension checks may improve the reliability of utility data by enhancing training and by identifying subjects who may have misunderstood the utility questions. PMID:15085909
NASA Astrophysics Data System (ADS)
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is
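The equilibrium-finding scheme described above — iteratively solving each participant's individual welfare maximization until no one modifies their bid — is best-response dynamics. A minimal sketch on a stand-in market (a simple Cournot duopoly with assumed linear demand and constant cost, not the dissertation's OPF-based spot-market model; all names and parameter values are illustrative):

```python
def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Quantity maximizing profit (a - b*(q + q_other) - c) * q,
    given the rival's quantity, under linear demand P = a - b*Q."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

def iterate_to_nash(q1=0.0, q2=0.0, tol=1e-10, max_iter=10_000):
    """Alternate best responses until neither player wants to deviate."""
    for _ in range(max_iter):
        q1_next = best_response(q2)
        q2_next = best_response(q1_next)
        if abs(q1_next - q1) < tol and abs(q2_next - q2) < tol:
            return q1_next, q2_next
        q1, q2 = q1_next, q2_next
    return q1, q2

q1_star, q2_star = iterate_to_nash()  # analytic Nash here: (a - c) / (3b) = 30
```

This contraction-friendly example converges, but, as the abstract warns for the electricity market, such fixed points need not exist or be unique in general, and best-response dynamics can cycle.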
Utilization of the terrestrial cyanobacterial sheet
NASA Astrophysics Data System (ADS)
Katoh, Hiroshi; Tomita-Yokotani, Kaori; Furukawa, Jun; Kimura, Shunta; Yamaguchi, Yuji; Takenaka, Hiroyuki; Kohno, Nobuyuki
2016-07-01
The terrestrial nitrogen-fixing cyanobacterium Nostoc commune lives in environments ranging from polar regions to deserts. It forms visible colonies composed of extracellular polymeric substances. Because of its extracellular polysaccharide, its desiccation tolerance, and its capacity for nitrogen fixation, N. commune is expected to be useful for agriculture, food, and terraforming. To demonstrate these potential abilities, an N. commune sheet was prepared for convenient use and evaluated in terms of plant growth and radioactive accumulation. We discuss the utilization of terrestrial cyanobacteria in closed environments.
Maximal radius of the aftershock zone in earthquake networks
NASA Astrophysics Data System (ADS)
Mezentsev, A. Yu.; Hayakawa, M.
2009-09-01
In this paper, several seismoactive regions (Japan, Southern California, and two tectonically distinct Japanese subregions) were investigated, and structural seismic constants were estimated for each region. Using the method for seismic clustering detection proposed by Baiesi and Paczuski [M. Baiesi, M. Paczuski, Phys. Rev. E 69 (2004) 066106; M. Baiesi, M. Paczuski, Nonlin. Proc. Geophys. (2005) 1607-7946], we obtained the equation of the aftershock zone (AZ). It was shown that taking into account the finite velocity of the seismic signal leads naturally to a maximal possible radius of the AZ. We obtained the equation for the maximal radius of the AZ as a function of the magnitude of the main event and estimated its values for each region.
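The Baiesi–Paczuski correlation metric underlying the clustering method, together with the causal bound a finite signal velocity imposes, can be sketched as follows (the parameter values b, d_f, c, and the velocity are illustrative assumptions, not the structural constants fitted in the paper):

```python
def bp_metric(t_ij, r_ij, m_i, b=1.0, d_f=1.6, c=1.0):
    """Baiesi-Paczuski space-time-magnitude distance between an earlier
    event i (magnitude m_i) and a later event j separated by time t_ij
    and distance r_ij; small values mark likely correlated pairs
    (e.g. aftershocks)."""
    return c * t_ij * (r_ij ** d_f) * 10.0 ** (-b * m_i)

def max_aftershock_radius(t_ij, v_signal):
    """A finite seismic-signal velocity caps how far an event occurring
    a time t_ij after the main shock can lie and still be causally
    related to it."""
    return v_signal * t_ij
```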
Safety factor maximization for trusses subjected to fatigue stresses
NASA Astrophysics Data System (ADS)
Hedaya, Mohammed Mohammed; Moneeb Elsabbagh, Adel; Hussein, Ahmed Mohamed
2015-08-01
This article presents a mathematical model for sizing optimization of undamped trusses subjected to dynamic loading leading to fatigue. The combined effect of static and dynamic loading, at steady state, is considered. An optimization model, whose objective is the maximization of the safety factor of these trusses, is developed. A new quantity (equivalent fatigue strain energy) combining the effects of static and dynamic stresses is presented. This quantity is used as a global measure of the proximity of fatigue failure. Therefore, the equivalent fatigue strain energy is minimized, and this seems to give a good value for the maximal equivalent static stress. This assumption is verified through two simple examples. The method of moving asymptotes is used in the optimization of trusses. The applicability of the proposed approach is demonstrated through two numerical examples; a 10-bar truss with different loading cases and a helicopter tail subjected to dynamic loading.
Magellan Project: Evolving enhanced operations efficiency to maximize science value
NASA Technical Reports Server (NTRS)
Cheuvront, Allan R.; Neuman, James C.; Mckinney, J. Franklin
1994-01-01
Magellan has been one of NASA's most successful spacecraft, returning more science data than all previous planetary spacecraft combined. The Magellan Spacecraft Team (SCT) has maximized the science return with innovative operational techniques to overcome anomalies and to perform activities for which the spacecraft was not designed. Commanding the spacecraft was originally time-consuming because the standard development process was envisioned as a set of manual tasks. The Program understood that reducing mission operations costs was essential for an extended mission. Management created an environment which encouraged automation of routine tasks, allowing staff reduction while maximizing the science data returned. Data analysis and trending, command preparation, and command reviews are some of the tasks that were automated. The SCT has accommodated personnel reductions by improving operations efficiency while returning the maximum science data possible.
Osthole suppresses seizures in the mouse maximal electroshock seizure model.
Luszczki, Jarogniew J; Andres-Mach, Marta; Cisowski, Wojciech; Mazol, Irena; Glowniak, Kazimierz; Czuczwar, Stanislaw J
2009-04-01
The aim of this study was to determine the anticonvulsant effects of osthole {[7-methoxy-8-(3-methyl-2-butenyl)-2H-1-benzopyran-2-one]--a natural coumarin derivative} in the mouse maximal electroshock-induced seizure model. The antiseizure effects of osthole were determined at 15, 30, 60, and 120 min after its systemic (i.p.) administration. Time course of anticonvulsant action of osthole revealed that the natural coumarin derivative produced a clear-cut antielectroshock activity in mice and the experimentally-derived ED(50) values for osthole ranged from 259 to 631 mg/kg. In conclusion, osthole suppresses seizure activity in the mouse maximal electroshock-induced seizure model. It may become a novel treatment option following further investigation in other animal models of epilepsy and preclinical studies. PMID:19236860
Controlled Dense Coding Using the Maximal Slice States
NASA Astrophysics Data System (ADS)
Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin
2016-04-01
In this paper we investigate controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ the maximal slice states as the quantum channel, which consists of a tripartite entangled state shared among the first party (Alice), the second party (Bob), and the third party (Cliff). The supervisor (Cliff) supervises and controls the channel between Alice and Bob via measurement. By carrying out local von Neumann measurements, a controlled-NOT operation, and a positive operator-valued measure (POVM), and by introducing an auxiliary particle, we obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
About closedness by convolution of the Tsallis maximizers
NASA Astrophysics Data System (ADS)
Vignat, C.; Hero, A. O., III; Costa, J. A.
2004-09-01
In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under an energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property, and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.
Planning for partnerships: Maximizing surge capacity resources through service learning.
Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B
2015-01-01
Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts. PMID:26750818
Letters to the editor : Cosmological constant in broken maximal supergravities.
Chalmers, G.; High Energy Physics
2002-12-01
We examine the form of the cosmological constant in the loop expansion of broken maximally supersymmetric supergravity theories, and after embedding, within superstring and M-theory. Supersymmetry breaking at the TeV scale generates values of the cosmological constant that are in agreement with current astrophysical data. The form of perturbative quantum effects in the loop expansion is consistent with this parameter regime.
Cardiovascular changes during maximal breath-holding in elite divers.
Guaraldi, Pietro; Serra, Maria; Barletta, Giorgio; Pierangeli, Giulia; Terlizzi, Rossana; Calandra-Buonaura, Giovanna; Cialoni, Danilo; Cortelli, Pietro
2009-12-01
During maximal breath-holding six healthy elite breath-hold divers, after an initial "easy-going" phase in which cardiovascular changes resembled the so-called "diving response", exhibited a sudden and severe rise in blood pressure during the "struggle" phase of the maneuver. These changes may represent the first tangible expression of a defense reaction, which overrides the classic diving reflex, aiming to reduce the hypoxic damage and to break the apnea before the loss of consciousness. PMID:19655193
Oncoplastic Breast Reduction: Maximizing Aesthetics and Surgical Margins
Chang, Michelle Milee; Huston, Tara; Ascherman, Jeffrey; Rohde, Christine
2012-01-01
Oncoplastic breast reduction combines oncologically sound concepts of cancer removal with aesthetically maximized approaches for breast reduction. Numerous incision patterns and types of pedicles can be used for purposes of oncoplastic reduction, each tailored for size and location of tumor. A team approach between reconstructive and breast surgeons produces positive long-term oncologic results as well as satisfactory cosmetic and functional outcomes, rendering oncoplastic breast reduction a favorable treatment option for certain patients with breast cancer. PMID:23209890
Optimum array design to maximize Fisher information for bearing estimation.
Tuladhar, Saurav R; Buck, John R
2011-11-01
Source bearing estimation is a common application of linear sensor arrays. The Cramer-Rao bound (CRB) sets a lower bound on the achievable mean square error (MSE) of any unbiased bearing estimate. In the spatially white noise case, the CRB is minimized by placing half of the sensors at each end of the array. However, many realistic ocean environments have a mixture of both white noise and spatially correlated noise. In shallow water environments, the correlated ambient noise can be modeled as cylindrically isotropic. This research designs a fixed aperture linear array to maximize the bearing Fisher information (FI) under these noise conditions. The FI is the inverse of the CRB, so maximizing the FI minimizes the CRB. The elements of the optimum array are located closer to the array ends than uniform spacing, but are not as extreme as in the white noise case. The optimum array results from a trade off between maximizing the array bearing sensitivity and minimizing output noise power variation over the bearing. Depending on the source bearing, the resulting improvement in MSE performance of the optimized array over a uniform array is equivalent to a gain of 2-5 dB in input signal-to-noise ratio. PMID:22087908
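The geometry argument can be illustrated numerically. Up to factors involving SNR, wavelength, and bearing, the bearing Fisher information of a linear array in spatially white noise scales with the sum of squared sensor offsets from the array centroid, which is why end-clustered placements beat uniform spacing for a fixed aperture. A sketch of that proxy only, not the paper's correlated-noise optimization:

```python
def fi_proxy(positions):
    """Sum of squared offsets from the array centroid: proportional to
    the bearing Fisher information in spatially white noise at fixed SNR."""
    centroid = sum(positions) / len(positions)
    return sum((x - centroid) ** 2 for x in positions)

uniform = [float(i) for i in range(10)]  # 10 sensors, aperture 9, uniform
split = [0.0] * 5 + [9.0] * 5            # half at each end, same aperture
# Splitting the sensors to the ends more than doubles the FI proxy.
```

Under cylindrically isotropic noise the optimum lies between these extremes, as the abstract describes.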
Reference Values of Maximal Oxygen Uptake for Polish Rowers
Klusiewicz, Andrzej; Starczewski, Michał; Ładyga, Maria; Długołęcka, Barbara; Braksator, Wojciech; Mamcarz, Artur; Sitkowski, Dariusz
2014-01-01
The aim of this study was to characterize changes in maximal oxygen uptake over several years and to elaborate current reference values of this index based on determinations carried out in large and representative groups of top Polish rowers. For this study 81 female and 159 male rowers from the sub-junior to senior categories were recruited from the Polish National Team and its direct backup. All the subjects performed an incremental exercise test on a rowing ergometer. During the test maximal oxygen uptake was measured with the BxB method. The calculated reference values for elite Polish junior and U23 rowers allow the athletes’ fitness level to be evaluated against the respective reference group and may aid coaches in controlling the training process. Mean values of VO2max achieved by members of the top Polish rowing crews who competed in the Olympic Games or World Championships over the last five years are also presented. The results of the research on the “trainability” of maximal oxygen uptake suggest that the growth rate of this index is greater for high-level athletes and that the index (in absolute values) increases significantly between the ages of 19–22 years (U23 category). PMID:25713672
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied, motivated by applications such as the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
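Because the spread function is monotone and submodular, the standard greedy algorithm carries the 1-1/e guarantee. A minimal sketch of greedy seed selection under the plain (unsigned) IC model, with spread estimated by Monte Carlo simulation; the paper's IC-P model adds polarity, which is omitted here, and the toy graph and probabilities are hypothetical:

```python
import random

def ic_spread(graph, seeds, trials=200, rng=random.Random(0)):
    """Monte Carlo estimate of expected spread under the Independent
    Cascade model. graph: dict node -> list of (neighbor, prob)."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seed_set(graph, k):
    """Greedy (1 - 1/e)-approximate selection, valid because the
    spread function is monotone and submodular."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: ic_spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds

toy = {0: [(1, 0.9), (2, 0.9)], 1: [(3, 0.9)], 2: [], 3: []}
print(greedy_seed_set(toy, 1))   # node 0 reaches the most nodes
```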
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex unconstrained environments, i.e., the Labeled Faces in the Wild database. PMID:25163062
Random effects structure for confirmatory hypothesis testing: Keep it maximal
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond. PMID:24403724
Pore space morphology analysis using maximal inscribed spheres
NASA Astrophysics Data System (ADS)
Silin, Dmitriy; Patzek, Tad
2006-11-01
A new robust algorithm analyzing the geometry and connectivity of the pore space of sedimentary rock is based on fundamental concepts of mathematical morphology. The algorithm distinguishes between the “pore bodies” and “pore throats,” and establishes their respective volumes and connectivity. The proposed algorithm also produces a stick-and-ball diagram of the rock pore space. The tests on a pack of equal spheres, for which the results are verifiable, confirm its stability. The impact of image resolution on the algorithm output is investigated on images of computer-generated pore space. One of the distinctive features of our approach is that no image thinning is applied. Instead, the information about the skeleton is stored through the maximal inscribed balls or spheres (MIS) associated with each voxel. These maximal balls retain information about the entire pore space. Comparison with the results obtained by a thinning procedure preserving some topological properties of the pore space shows that our method produces more realistic estimates of the number and shapes of pore bodies and pore throats, and of the pore coordination numbers. The distribution of maximal inscribed spheres makes possible the simulation of mercury injection and computation of the corresponding dimensionless capillary pressure curve. It turns out that the calculated capillary pressure curve is a robust descriptor of the pore space geometry and, in particular, can be used to determine the quality of computer-based rock reconstruction.
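The maximal inscribed ball centered at a pore voxel has a radius equal to that voxel's Euclidean distance to the nearest solid voxel, so a distance transform recovers the MIS field without any thinning. A brute-force 2-D sketch on a toy binary image (not the authors' rock data; a real implementation would use a fast distance transform):

```python
import numpy as np

def maximal_inscribed_radii(pore):
    """For each pore voxel, the radius of the maximal inscribed ball
    centered there equals its Euclidean distance to the nearest
    non-pore voxel (brute force; fine for small images)."""
    solid = np.argwhere(~pore)
    radii = np.zeros(pore.shape)
    for p in np.argwhere(pore):
        d = np.sqrt(((solid - p) ** 2).sum(axis=1))
        radii[tuple(p)] = d.min()
    return radii

# A 7x7 image: one open square "pore body" in a solid matrix.
img = np.zeros((7, 7), dtype=bool)
img[1:6, 1:6] = True
r = maximal_inscribed_radii(img)
print(r[3, 3])   # the center voxel hosts the largest inscribed ball
```

Local maxima of this radius field mark pore bodies, while saddle-like constrictions between them correspond to pore throats.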
Echocardiographic dimensions and maximal oxygen uptake in oarsmen during training.
Wieling, W; Borghols, E A; Hollander, A P; Danner, S A; Dunning, A J
1981-01-01
We studied nine freshmen and 14 senior oarsmen undergraduates during seven months of training and compared them with 17 age- and sex-matched sedentary control subjects in order to assess the influence of heavy physical exercise on cardiac dimensions and maximal oxygen uptake. Standard M-mode echocardiographic techniques were used. At the start of the season senior oarsmen had a greater left ventricular end-diastolic dimension, and a thicker interventricular septum and posterior left ventricular wall, than control subjects and freshmen oarsmen. The latter two groups did not differ from each other. During the training period there was a slight and gradual increase in left ventricular end-diastolic dimension, and interventricular septum and posterior wall thickness, in freshmen. In seniors only left ventricular end-diastolic dimension increased significantly. Maximal oxygen uptake showed a distinct increase between the fourth and seventh month during the period of intensive rowing training. There was no relation between echocardiographic variables and maximal oxygen uptake. A combination of heavy dynamic and static exercise can thus lead to significant changes in both left ventricular wall thickness and chamber size within months. Echocardiographic variables measured at rest cannot be used as a suitable index of performance capacity. PMID:7272130
Yu, Chao; Sharma, Gaurav
2010-08-01
We explore camera scheduling and energy allocation strategies for lifetime optimization in image sensor networks. For the application scenarios that we consider, visual coverage over a monitored region is obtained by deploying wireless, battery-powered image sensors. Each sensor camera provides coverage over a part of the monitored region and a central processor coordinates the sensors in order to gather required visual data. For the purpose of maximizing the network operational lifetime, we consider two problems in this setting: a) camera scheduling, i.e., the selection, among available possibilities, of a set of cameras providing the desired coverage at each time instance, and b) energy allocation, i.e., the distribution of total available energy between the camera sensor nodes. We model the network lifetime as a stochastic random variable that depends upon the coverage geometry for the sensors and the distribution of data requests over the monitored region, two key characteristics that distinguish our problem from other wireless sensor network applications. By suitably abstracting this model of network lifetime and utilizing asymptotic analysis, we propose lifetime-maximizing camera scheduling and energy allocation strategies. The effectiveness of the proposed camera scheduling and energy allocation strategies is validated by simulations. PMID:20350857
A Novel Searching Method for Distribution Network Topology to Maximize Outputs of PV Clusters
NASA Astrophysics Data System (ADS)
Sato, Tsunaki; Saitoh, Hiroumi
This paper proposes a novel searching method for radial distribution network topologies to maximize the outputs of PV generators. When many PV generators are connected to a distribution network, it is necessary to control PV outputs to keep the voltage within the regulated range of power quality. The aim of the paper is to find the optimal topology, that is, the state of section switches that maximizes the total PV output. This is a combinatorial optimization problem, and exhaustive search requires enormous computation time. In this paper, a novel searching method is proposed that replaces the complex combinatorial optimization problem with a minimization problem over the voltage profile of the distribution network. The features of the proposed method are the use of the relation between the total output of PV generators and the voltage profile, the superposition principle applied to radial electric circuits, and a sequential search algorithm for the optimal topology. To verify the proposed method, a comparative study was carried out using two distribution network models. The results indicate that the proposed method can find a near-optimal topology in a short time compared with a tabu search approach.
NASA Astrophysics Data System (ADS)
Qiu, Jianjun; Li, Pengcheng; Luo, Weihua; Wang, Jia; Zhang, Hongyan; Luo, Qingming
2010-01-01
Laser speckle contrast imaging is a technique used for imaging blood flow without scanning. Though several studies have attempted to combine spatial and temporal statistics of laser speckle images for reducing image noise as well as preserving acceptable spatiotemporal resolution, the statistical accuracy of these spatiotemporal methods has not been thoroughly compared. Through numerical simulation and animal experiments, this study investigates the changes in the mean speckle contrast values and the relative noise of the speckle contrast images computed by these methods with various numbers of frames and spatial windows. The simulation results show that the maximum relative error of the mean speckle contrast computed by the spatiotemporal laser speckle contrast analysis (STLASCA) method, in which the speckle contrast images are computed by analyzing the 3-D spatiotemporal speckle image cube, is approximately 5%, while it is higher than 13% for other methods. Changes in the mean speckle contrast values and the relative noise computed by these methods for animal experiment data are consistent with the simulation results. Our results demonstrate that STLASCA achieves more accurate speckle contrast, and suggest that STLASCA most effectively utilizes the number of pixels, thus achieving maximized speckle contrast, and thereby maximizing the variation of the laser speckle contrast image.
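Speckle contrast is defined as K = sigma/mu of the intensity within an analysis window; the STLASCA idea is to take that window as a 3-D spatiotemporal cube rather than a purely spatial or purely temporal one. A simplified sketch on synthetic static speckle (this is an illustration of the contrast statistic, not the authors' implementation):

```python
import numpy as np

def stlasca_contrast(stack, w=3):
    """Spatiotemporal speckle contrast: K = sigma/mu computed over a
    w x w x n_frames cube around each pixel, i.e. the full 3-D
    spatiotemporal speckle image cube at that location."""
    n_frames, h, wd = stack.shape
    half = w // 2
    K = np.zeros((h, wd))
    for i in range(half, h - half):
        for j in range(half, wd - half):
            cube = stack[:, i - half:i + half + 1, j - half:j + half + 1]
            K[i, j] = cube.std() / cube.mean()
    return K

rng = np.random.default_rng(0)
# Fully developed static speckle has negative-exponential intensity
# statistics, for which the ideal contrast is K = 1.
static = rng.exponential(1.0, size=(30, 16, 16))
K = stlasca_contrast(static)
print(K[8, 8])   # close to 1 for static speckle
```

Using all w*w*n_frames pixels in one estimate is what reduces the relative noise of K compared with smaller spatial-only or temporal-only windows.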
Reliability of heart rate measures during walking before and after running maximal efforts.
Boullosa, D A; Barros, E S; del Rosso, S; Nakamura, F Y; Leicht, A S
2014-11-01
Previous studies on HR recovery (HRR) measures have utilized the supine and the seated postures. However, the most common recovery mode in sport and clinical settings after running exercise is active walking. The aim of the current study was to examine the reliability of HR measures during walking (4 km · h(-1)) before and following a maximal test. Twelve endurance athletes performed an incremental running test on 2 days separated by 48 h. Absolute (coefficient of variation, CV, %) and relative [Intraclass correlation coefficient, (ICC)] reliability of time domain and non-linear measures of HR variability (HRV) from 3 min recordings, and HRR parameters over 5 min were assessed. Moderate to very high reliability was identified for most HRV indices with short-term components of time domain and non-linear HRV measures demonstrating the greatest reliability before (CV: 12-22%; ICC: 0.73-0.92) and after exercise (CV: 14-32%; ICC: 0.78-0.91). Most HRR indices and parameters of HRR kinetics demonstrated high to very high reliability with HR values at a given point and the asymptotic value of HR being the most reliable (CV: 2.5-10.6%; ICC: 0.81-0.97). These findings demonstrate these measures as reliable tools for the assessment of autonomic control of HR during walking before and after maximal efforts. PMID:24841837
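The reliability statistics quoted above can be reproduced on toy data: the within-subject coefficient of variation (CV, %) and an intraclass correlation coefficient, here ICC(2,1) as one common choice (the abstract does not specify the ICC form, and all numbers below are simulated, not the study's data):

```python
import numpy as np

def typical_cv_percent(test, retest):
    """Within-subject CV (%) for a test-retest design: per-subject SD
    over the two trials divided by the per-subject mean, averaged."""
    pairs = np.stack([test, retest], axis=1)
    cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)
    return 100 * cv.mean()

def icc_2_1(test, retest):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures (Shrout-Fleiss convention), with k = 2 trials."""
    data = np.stack([test, retest], axis=1)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_e = ((data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e
                            + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(2)
true_hr = rng.normal(120, 10, size=12)        # 12 hypothetical athletes
day1 = true_hr + rng.normal(0, 2, size=12)    # small trial-to-trial noise
day2 = true_hr + rng.normal(0, 2, size=12)
print(round(icc_2_1(day1, day2), 2))          # high relative reliability
```

Low CV indicates good absolute reliability; an ICC near 1 indicates good relative reliability, matching the interpretation used in the study.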
Content specificity of expectancy beliefs and task values in elementary physical education.
Chen, Ang; Martin, Robert; Ennis, Catherine D; Sun, Haichun
2008-06-01
The curriculum may superimpose a content-specific context that mediates motivation (Bong, 2001). This study examined content specificity of expectancy-value motivation in elementary school physical education. Students' expectancy beliefs and perceived task values from a cardiorespiratory fitness unit, a muscular fitness unit, and a traditional skill/game unit were analyzed using constant comparison coding procedures, multivariate analysis of variance, chi-square tests, and correlation analyses. There was no difference in the intrinsic interest value among the three content conditions. Expectancy beliefs, attainment values, and utility values were significantly higher for the cardiorespiratory fitness curriculum. Correlations differentiated among the expectancy-value components of the content conditions, providing further evidence of content specificity in the expectancy-value motivation process. The findings suggest that expectancy beliefs and task values should be incorporated in the theoretical platform for curriculum development, based on the learning outcomes that can be specified with enhanced motivation effect. PMID:18664044
Expectations for Melodic Contours Transcend Pitch
Graves, Jackson E.; Micheyl, Christophe; Oxenham, Andrew J.
2015-01-01
The question of what makes a good melody has interested composers, music theorists, and psychologists alike. Many of the observed principles of good “melodic continuation” involve melodic contour – the pattern of rising and falling pitch within a sequence. Previous work has shown that contour perception can extend beyond pitch to other auditory dimensions, such as brightness and loudness. Here, we show with two experiments that the generalization of contour perception to non-traditional dimensions also extends to melodic expectations. In the first experiment, subjective ratings for three-tone sequences that vary in brightness or loudness conformed to the same general contour-based expectations as pitch sequences. In the second experiment, we modified the sequence of melody presentation such that melodies with the same beginning were blocked together. This change produced substantively different results, but the patterns of ratings remained similar across the three auditory dimensions. Taken together, these results suggest that 1) certain well-known principles of melodic expectation (such as the expectation for a reversal following a skip) are dependent on long-term context, and 2) these expectations are not unique to the dimension of pitch and may instead reflect more general principles of perceptual organization. PMID:25365571
Components of attention modulated by temporal expectation.
Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus
2015-01-01
By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is modulated by the observer's temporal expectations. We extended the investigation from single-stimulus recognition to whole report (Experiment 1) and partial report (Experiment 2). Cue-stimulus foreperiods were distributed geometrically using time steps of 500 ms. In high expectancy conditions, the probability that the stimulus would appear on the next time step, given that it had not yet appeared, was high, whereas in low expectancy conditions, the probability was low. The speed of encoding the stimuli into VSTM was higher in the high expectancy conditions. In line with the Easterbrook (1959) hypothesis, under high temporal expectancy, the processing was also more focused (selective). First, the storage capacity of VSTM was lower, so that fewer stimuli were encoded into VSTM. Second, the distribution of attentional weights across stimuli was less even: The efficiency of selecting targets rather than distractors for encoding into VSTM was higher, as was the spread of the attentional weights of the target letters. PMID:25068851
Controlling Your Utility Rates.
ERIC Educational Resources Information Center
Lucht, Ray; Dembowski, Frederick L.
1985-01-01
A cost-effective alternative to high utility bills for middle-sized and smaller utility users is the service of utility rate consultants. The consultants analyze utility invoices for the previous 12 months to locate available refunds or credits. (MLF)
Expected rates with mini-arrays for air showers
NASA Technical Reports Server (NTRS)
Hazen, W. E.
1985-01-01
As a guide in the design of mini-arrays used to exploit the Linsley effect in the study of air showers, it is useful to calculate the expected rates. The results can aid in the choice of detectors and their placement, or in predicting the utility of existing detector systems. Furthermore, the potential of the method can be appraised for the study of large showers. Specifically, we treat the case of a mini-array of dimensions small enough compared to the distance to the axes of the showers of interest that it can be considered a point detector. The input information is taken from the many previous studies of air showers by other groups. The calculations give: (1) the expected integral rate, F(sigma, rho), for disk thickness, sigma, or rise time, t_1/2, with local particle density, rho, as a parameter; (2) the effective detection area A(N), with sigma(min) and rho(min) as parameters; (3) the expected rate of collection of data, F_L(N), versus shower size, N.
Ong, S.; Denholm, P.
2011-07-01
Schools in California often have a choice between multiple electricity rate options. For schools with photovoltaic (PV) installations, choosing the right rate is essential to maximize the value of PV generation. The rate option that minimizes a school's electricity expenses often does not remain the most economical choice after the school installs a PV system. The complex interaction between PV generation, building load, and rate structure makes determining the best rate a challenging task. This report evaluates 22 rate structures across three of California's largest electric utilities--Pacific Gas and Electric Co. (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E)--in order to identify common rate structure attributes that are favorable to PV installations.
Aykul, Senem; Martinez-Hackert, Erik
2016-09-01
Half-maximal inhibitory concentration (IC50) is the most widely used and informative measure of a drug's efficacy. It indicates how much drug is needed to inhibit a biological process by half, thus providing a measure of potency of an antagonist drug in pharmacological research. Most approaches to determine the IC50 of a pharmacological compound are based on assays that utilize whole-cell systems. While they generally provide outstanding potency information, results can depend on the experimental cell line used and may not differentiate a compound's ability to inhibit specific interactions. Here we show, using the secreted Transforming Growth Factor-β (TGF-β) family ligand BMP-4 and its receptors as an example, that surface plasmon resonance can be used to accurately determine IC50 values of individual ligand-receptor pairings. The molecular resolution achievable with this approach can help distinguish inhibitors that specifically target individual complexes, or that can inhibit multiple functional interactions at the same time. PMID:27365221
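An IC50 is the dose at which a fitted inhibition curve crosses half of its dynamic range. A minimal sketch fitting a one-parameter Hill model by grid search on synthetic data (real dose-response analyses typically fit a four-parameter logistic with a dedicated fitting routine; all numbers here are hypothetical):

```python
import numpy as np

def hill_inhibition(dose, ic50, hill=1.0):
    """Fraction of the response remaining at a given antagonist dose
    under a simple Hill inhibition model."""
    return 1.0 / (1.0 + (dose / ic50) ** hill)

def fit_ic50(doses, responses, grid=np.logspace(-3, 3, 2001)):
    """Estimate IC50 by least squares over a log-spaced grid of
    candidate values (a minimal sketch, not a production fitter)."""
    errs = [((hill_inhibition(doses, c) - responses) ** 2).sum()
            for c in grid]
    return grid[int(np.argmin(errs))]

doses = np.logspace(-2, 2, 9)          # hypothetical dose series
true_ic50 = 1.5
resp = hill_inhibition(doses, true_ic50)
print(fit_ic50(doses, resp))           # recovers the half-inhibition dose
```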