Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and a statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
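The filtered-MLEM idea above can be sketched in a few lines: a classical MLEM multiplicative update interleaved with a smoothing filter on the source estimate. This is an illustrative sketch only; the generic system matrix `A`, the 1-D source grid, and the Gaussian kernel are assumptions, not the paper's SPN-based forward model.

```python
import numpy as np

def fmlem(A, y, n_iter=50, sigma=1.0):
    """Filtered MLEM sketch: multiplicative EM update followed by a
    simple Gaussian smoothing filter on the source estimate.
    A: (m, n) nonnegative system matrix; y: (m,) measured data.
    The 1-D source grid and kernel are illustrative assumptions."""
    m, n = A.shape
    x = np.ones(n)                      # flat nonnegative initial guess
    sens = A.sum(axis=0)                # sensitivity image A^T 1
    r = np.arange(n)
    K = np.exp(-0.5 * ((r[:, None] - r[None, :]) / sigma) ** 2)
    K /= K.sum(axis=1, keepdims=True)   # row-normalized Gaussian filter
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x / sens * (A.T @ ratio)    # classical MLEM step
        x = K @ x                       # filtering step
    return x
```

The filter keeps the estimate smooth at every iteration instead of constraining the support to a PSR, which is the point of the global-reconstruction strategy.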
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when used to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint in the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which regularizes the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to Pixon image reconstruction with a notably reduced computational burden, and better fidelity to the measurements than CLEAN with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
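A minimal sketch of the optimally-stopped EM scheme described above, assuming a generic linear count model. The reduced Cash statistic used as the stopping criterion here is a simplified stand-in for the paper's actual stopping rule.

```python
import numpy as np

def em_cstat(A, y, max_iter=500, c_target=1.0):
    """EM (Richardson-Lucy-type) iteration with an early-stopping rule
    based on the reduced Cash statistic. A: (m, n) nonnegative response,
    y: (m,) observed counts. A rough sketch, not the RHESSI pipeline."""
    m, n = A.shape
    x = np.full(n, y.sum() / A.sum())   # positive initial guess
    sens = A.sum(axis=0)
    c = np.inf
    for it in range(max_iter):
        yhat = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / yhat)) / sens  # positivity-preserving EM step
        yhat = np.maximum(A @ x, 1e-12)
        # reduced Cash statistic (with the convention 0 * log 0 := 0)
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(y > 0, y * np.log(y / yhat), 0.0)
        c = 2.0 * np.sum(yhat - y + t) / m
        if c <= c_target:               # stop once the fit is statistically good
            break
    return x, c, it
```

Stopping when the reduced C-statistic reaches about 1 is what regularizes the solution; iterating further would begin fitting the Poisson noise.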
The role of multisensory interplay in enabling temporal expectations.
Ball, Felix; Michels, Lara E; Thiele, Carsten; Noesselt, Toemme
2018-01-01
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested, in a series of experiments, whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE), measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′, but not RT, were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease in RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation, and that multisensory benefits are maximal when stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities, which in turn boost (uni-)sensory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
Brand, Samuel P C; Keeling, Matt J
2017-03-01
It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
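The alternation between a classification (EM) step and a regularized reconstruction step described above can be illustrated as follows, with a linear forward operator standing in for the nonlinear DOT model (an assumption for brevity); the mixture parameters and the class-dependent prior mean/precision follow the structure of the abstract, not its exact equations.

```python
import numpy as np

def reconstruct_classify(A, y, n_classes=2, n_outer=10, alpha=1.0):
    """Sketch of the alternating scheme: EM updates of a Gaussian
    mixture over the current pixel estimates, interleaved with a
    Tikhonov step whose mean and variance come from the classification."""
    m, n = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # initial estimate
    mu = np.linspace(x.min(), x.max(), n_classes)     # class means
    var = np.full(n_classes, x.var() + 1e-6)          # class variances
    pi = np.full(n_classes, 1.0 / n_classes)          # class weights
    for _ in range(n_outer):
        # E-step: responsibility of each class for each pixel
        d = -0.5 * (x[:, None] - mu[None, :]) ** 2 / var - 0.5 * np.log(var)
        r = pi * np.exp(d - d.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters
        nk = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / n
        # reconstruction step: Tikhonov with pixel-wise prior mean/precision
        xbar = (r * mu).sum(axis=1)                   # prior mean per pixel
        w = alpha * (r / var).sum(axis=1)             # prior precision per pixel
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y + w * xbar)
    return x, mu
```

Each outer iteration sharpens the class assignments, which in turn tightens the prior for the next reconstruction step, mirroring the contrast enhancement reported in the abstract.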
J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms
NASA Astrophysics Data System (ADS)
Fackler, Stephan
2017-05-01
An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.
Symbolic Dynamics and Grammatical Complexity
NASA Astrophysics Data System (ADS)
Hao, Bai-Lin; Zheng, Wei-Mou
The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results
Data Unfolding with Wiener-SVD Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, W.; Li, X.; Qian, X.
2017-10-04
Here, data unfolding is a common technique in high energy physics (HEP) data analysis. Inspired by the deconvolution technique in digital signal processing, a new unfolding technique based on SVD and the well-known Wiener filter is introduced. The Wiener-SVD unfolding approach achieves the unfolding by maximizing the signal-to-noise ratio in the effective frequency domain, given expectations of signal and noise, and is free of a regularization parameter. Through a couple of examples, the pros and cons of the Wiener-SVD approach, as well as the nature of the unfolded results, are discussed.
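A toy version of the Wiener-SVD idea: SVD the response, then damp each singular component with a Wiener factor built from the expected signal projected into the effective frequency domain. The square response, diagonal noise model, and `x_expect` argument (playing the role of the expected signal) are simplifying assumptions.

```python
import numpy as np

def wiener_svd_unfold(R, y, x_expect, noise_var=1.0):
    """Minimal Wiener-SVD sketch. R: (m, n) response matrix,
    y: (m,) measured spectrum, x_expect: (n,) expected true signal."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    d = Vt @ x_expect                                # expected signal per component
    wf = (s * d) ** 2 / ((s * d) ** 2 + noise_var)   # Wiener damping factors
    coeffs = (U.T @ y) / s                           # naive SVD inversion coefficients
    return Vt.T @ (wf * coeffs)                      # damped, rotated back
```

The damping factors replace the hand-tuned cutoff of classical SVD unfolding, which is why the method needs no explicit regularization parameter.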
Asadi, Abbas; Ramirez-Campillo, Rodrigo; Meylan, Cesar; Nakamura, Fabio Y; Cañas-Jamett, Rodrigo; Izquierdo, Mikel
2017-12-01
The aim of the present study was to compare maximal-intensity exercise adaptations in young basketball players (who were strong individuals at baseline) participating in regular basketball training versus regular plus volume-based plyometric training in the pre-season period. Young basketball players were recruited and assigned either to a plyometric with regular basketball training group (experimental group [EG]; n=8) or a basketball training only group (control group [CG]; n=8). The athletes in the EG performed periodized (i.e., from 117 to 183 jumps per session) plyometric training for eight weeks. Before and after the intervention, players were assessed in vertical and broad jump, change of direction, maximal strength, and a 60-m sprint test. No significant improvements were found in the CG, while the EG improved vertical jump (effect size [ES]=2.8), broad jump (ES=2.4), agility T test (ES=2.2), Illinois agility test (ES=1.4), maximal strength (ES=1.8), and 60-m sprint (ES=1.6) (P<0.05) after the intervention, and the improvements were greater compared with the CG (P<0.05). Plyometric training in addition to regular basketball practice can lead to meaningful improvements in maximal-intensity exercise adaptations among young basketball players during the pre-season.
Mixture models with entropy regularization for community detection in networks
NASA Astrophysics Data System (ADS)
Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang
2018-04-01
Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic networks and real networks has shown that the proposed EMM model is superior to the state-of-the-art methods.
Maximal volume behind horizons without curvature singularity
NASA Astrophysics Data System (ADS)
Wang, Shao-Jun; Guo, Xin-Xuan; Wang, Towe
2018-01-01
The black hole information paradox is related to the area of the event horizon, and potentially to the volume and singularity behind it. One example is the complexity/volume duality conjectured by Stanford and Susskind. Accepting the proposal of Christodoulou and Rovelli, we calculate the maximal volume inside regular black holes, which are free of curvature singularity, in asymptotically flat and anti-de Sitter spacetimes respectively. The complexity/volume duality is then applied to anti-de Sitter regular black holes. We also present an analytical expression for the maximal volume outside the de Sitter horizon.
Hermassi, Souhail; van den Tillaar, Roland; Khlifa, Riadh; Chelly, Mohamed Souhaiel; Chamari, Karim
2015-08-01
The purpose of this study was to compare the effect of a specific resistance training program (throwing movements with a medicine ball) with that of regular training (throwing with regular balls) on ball velocity, anthropometry, maximal upper-body strength, and power. Thirty-four elite male team handball players (age: 18 ± 0.5 years, body mass: 80.6 ± 5.5 kg, height: 1.80 ± 5.1 m, body fat: 13.4 ± 0.6%) were randomly assigned to 1 of 3 groups: control (n = 10), resistance training (n = 12), or regular throwing training (n = 12). Over the 8-week in-season period, the athletes trained 3 times per week according to their assigned program alongside their normal team handball training. One repetition maximum (1RM) bench press and 1RM pullover scores assessed maximal arm strength. Anthropometry was assessed by body mass, fat percentage, and upper-body muscle volumes. Handball throwing velocity was measured in a standing throw, a throw with run, and a jump throw. Power was measured as the total distance of a 3-kg medicine ball overhead throw. Throwing velocity, maximal strength, power, and muscle volume increased in the specific resistance training group after the 8 weeks of training, whereas only maximal strength, muscle volume, and power in the jump throw increased in the regular throwing training group. No significant changes were found for the control group. The current findings suggest that elite male handball players can improve ball velocity, anthropometrics, maximal upper-body strength, and power during the competition season by implementing a medicine ball throwing program.
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
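The ℓ1-DCT penalty can be illustrated with ISTA on a Gaussian data term; the paper's Poisson likelihood and PAPA solver are replaced here for brevity, and the 1-D orthonormal DCT-II (a tight frame) stands in for the non-decimated 2D decomposition.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so C @ C.T == I (a tight frame)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def ista_dct_l1(A, y, lam=0.05, n_iter=300):
    """ISTA sketch for min_x 0.5*||Ax - y||^2 + lam*||C x||_1, x >= 0:
    gradient step, soft-thresholding in the DCT domain, then a
    nonnegativity projection (a simplification of PAPA)."""
    m, n = A.shape
    C = dct_matrix(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data term
        z = C @ (x - g / L)                # step, then into the DCT domain
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        x = np.maximum(C.T @ z, 0.0)       # back-transform, enforce x >= 0
    return x
```

Soft-thresholding the DCT coefficients is exactly what the ℓ1 penalty buys: small, noise-like coefficients are zeroed while the smooth structure of the activity distribution survives.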
Liu, Feng
2018-01-01
In this paper we investigate the endpoint regularity of the discrete m-sublinear fractional maximal operator associated with [Formula: see text]-balls, both in the centered and uncentered versions. We show that these operators map [Formula: see text] into [Formula: see text] boundedly and continuously. Here [Formula: see text] represents the set of functions of bounded variation defined on [Formula: see text].
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.
Liu, Zitao; Hauskrecht, Milos
2015-01-01
Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down the LDS's spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent overfitting given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix, which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time series dynamics and that it improves predictive performance when compared to baselines on both synthetic and real-world MTS datasets.
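A stand-in illustration of the "shut down spurious dimensions" idea: singular-value soft-thresholding of the transition matrix, i.e. the proximal step of a nuclear-norm penalty. The paper's actual priors and MAP-EM machinery differ; this shows only how a low-rank-inducing step prunes weak hidden-state dimensions.

```python
import numpy as np

def shrink_transition(Amat, tau):
    """Singular-value soft-thresholding of an LDS transition matrix:
    the proximal operator of tau * ||A||_* (nuclear norm). Returns the
    shrunken matrix and its effective rank (surviving dimensions)."""
    U, s, Vt = np.linalg.svd(Amat, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # shrink; weak modes go to zero
    return U @ np.diag(s) @ Vt, int(np.count_nonzero(s))
```

Dimensions whose singular values fall below `tau` are zeroed out entirely, which is how a low-rank transition matrix encodes "fewer hidden states than the nominal dimension".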
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
expectation-maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a… (Report contents include: Relationship with Expectation-Maximization; Simulation Examples; Synthetic MFA Example; Manifold Learning Example.)
Haas, Kevin R; Yang, Haw; Chu, Jhih-Wei
2013-12-12
The dynamics of a protein along a well-defined coordinate can be formally projected onto the form of an overdamped Langevin equation. Here, we present a comprehensive statistical-learning framework for simultaneously quantifying the deterministic force (the potential of mean force, PMF) and the stochastic force (characterized by the diffusion coefficient, D) from single-molecule Förster-type resonance energy transfer (smFRET) experiments. The likelihood functional of the Langevin parameters, PMF and D, is expressed by a path integral of the latent smFRET distance that follows Langevin dynamics and is realized by the donor and the acceptor photon emissions. The solution is made possible by an eigen decomposition of the time-symmetrized form of the corresponding Fokker-Planck equation coupled with photon statistics. To extract the Langevin parameters from photon arrival time data, we advance the expectation-maximization algorithm in statistical learning, originally developed for and mostly used in discrete-state systems, to a general form in continuous space that allows for a variational calculus on the continuous PMF function. We also introduce the regularization of the solution space in this Bayesian inference based on a maximum trajectory-entropy principle. We use a highly nontrivial example with realistically simulated smFRET data to illustrate the application of this new method.
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with fluorochrome (Cy5.5), and optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
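The floating-default regularization can be sketched as a geometric interpolation between the Richardson-Lucy update and a Gaussian-smoothed copy of the current estimate (the floating default). The exact interpolation form, the 1-D grid, and the generic system matrix are assumptions made for the sketch.

```python
import numpy as np

def rl_floating_default(A, y, beta=0.2, sigma=1.5, n_iter=100):
    """Richardson-Lucy with entropic regularization against a floating
    default: the default is a normalized-Gaussian-smoothed copy of the
    current estimate, and the update geometrically interpolates between
    the RL step and that default (strength beta)."""
    m, n = A.shape
    x = np.ones(n)                          # constant initial condition
    sens = A.sum(axis=0)
    r = np.arange(n)
    G = np.exp(-0.5 * ((r[:, None] - r[None, :]) / sigma) ** 2)
    G /= G.sum(axis=1, keepdims=True)       # normalized Gaussian kernel
    for _ in range(n_iter):
        x_rl = x / sens * (A.T @ (y / np.maximum(A @ x, 1e-12)))  # RL step
        default = np.maximum(G @ x, 1e-12)  # floating default prior
        # entropic pull toward the floating default
        x = x_rl ** (1 / (1 + beta)) * default ** (beta / (1 + beta))
    return x
```

Because the default floats with the estimate rather than being fixed, the smoothing penalizes noise patterns without biasing the reconstruction toward any preset image, matching the behavior described in the abstract.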
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to the non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses the strategies of choosing parameters. Simulation and experimental results indicate that the reconstructed images with higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
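The non-negativity constraint at the heart of the abstract can be illustrated with plain projected gradient descent on a linearized data-fit term, a much simpler stand-in for the gradient projection-reduced Newton (GPRN) iteration the paper uses.

```python
import numpy as np

def gradient_projection_nnls(A, y, n_iter=3000):
    """Projected gradient sketch of non-negatively constrained inversion:
    minimize 0.5*||Ax - y||^2 subject to x >= 0 by taking a gradient step
    of size 1/L and projecting onto the nonnegative orthant."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x - (A.T @ (A @ x - y)) / L, 0.0)  # step + project
    return x
```

Keeping every iterate in the nonnegative orthant is what prevents the sign-flipping artifacts that unconstrained Tikhonov or CG solutions exhibit in the presence of noise.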
Independence of reaction time and response force control during isometric leg extension.
Fukushi, Tamami; Ohtsuki, Tatsuyuki
2004-04-01
In this study, we examined the relative control of reaction time and force in responses of the lower limb. Fourteen female participants (age 21.2 +/- 1.0 years, height 1.62 +/- 0.05 m, body mass 54.1 +/- 6.1 kg; mean +/- s) were instructed to exert their maximal isometric one-leg extension force as quickly as possible in response to an auditory stimulus presented after one of 13 foreperiod durations, ranging from 0.5 to 10.0 s. In the 'irregular condition' each foreperiod was presented in random order, while in the 'regular condition' each foreperiod was repeated consecutively. A significant interactive effect of foreperiod duration and regularity on reaction time was observed (P < 0.001 in two-way ANOVA with repeated measures). In the irregular condition the shorter foreperiod induced a longer reaction time, while in the regular condition the shorter foreperiod induced a shorter reaction time. Peak amplitude of isometric force was affected only by the regularity of foreperiod and there was a significant variation of changes in peak force across participants; nine participants were shown to significantly increase peak force for the regular condition (P < 0.001), three to decrease it (P < 0.05) and two showed no difference. These results indicate the independence of reaction time and response force control in the lower limb motor system. Variation of changes in peak force across participants may be due to the different attention to the bipolar nature of the task requirements such as maximal force and maximal speed.
Exercise Prescriptions for Active Seniors: A Team Approach for Maximizing Adherence.
ERIC Educational Resources Information Center
Brennan, Fred H., Jr.
2002-01-01
Exercise is an important "medication" that healthcare providers can prescribe for their geriatric patients. Increasing physical fitness by participating in regular exercise can reduce the effects of aging that lead to functional declines and poor health. Modest regular exercise can substantially lower the risk of death from coronary…
Impacts of Maximizing Tendencies on Experience-Based Decisions.
Rim, Hye Bin
2017-06-01
Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, a desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information search would lead maximizers to make experience-based decisions as if payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence in both the accuracy of knowledge gained from experience and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers, as expected. The implications of the findings and unanswered questions for future research are discussed.
Does aerobic exercise mitigate the effects of cigarette smoking on arterial stiffness?
Park, Wonil; Miyachi, Motohiko; Tanaka, Hirofumi
2014-09-01
The largest percentage of mortality from tobacco smoking is cardiovascular-related. It is not known whether regular participation in exercise mitigates the adverse influence of smoking on vasculature. Accordingly, the authors determined whether regular aerobic exercise is associated with reduced arterial stiffness in men who smoke cigarettes. Using a cross-sectional study design, 78 young men were studied, including sedentary nonsmokers (n=20), sedentary smokers (n=12), physically active nonsmokers (n=21), and physically active smokers (n=25). Arterial stiffness was assessed by brachial-ankle pulse wave velocity (baPWV). There were no group differences in height, body fat, and systolic and diastolic blood pressure. As expected, both physically active groups demonstrated greater maximal oxygen consumption and lower heart rate at rest than their sedentary peers. The sedentary smokers demonstrated greater baPWV than the sedentary nonsmokers (11.8±1 m/s vs 10.6±1 m/s, P=.036). baPWV values were not different between the physically active nonsmokers and the physically active smokers (10.8±1 m/s vs 10.7±1 m/s). Chronic smoking is associated with arterial stiffening in sedentary men but a significant smoking-induced increase in arterial stiffness was not observed in physically active adults. These results are consistent with the idea that regular participation in physical activity may mitigate the adverse effects of smoking on the vasculature. ©2014 Wiley Periodicals, Inc.
Tabet, Michael R.; Norman, Mantana K.; Fey, Brittney K.; Tsibulsky, Vladimir L.; Millard, Ronald W.
2011-01-01
Differences in the time to maximal effect (Tmax) of a series of dopamine receptor antagonists on the self-administration of cocaine are not consistent with their lipophilicity (octanol-water partition coefficients at pH 7.4) and expected rapid entry into the brain after intravenous injection. It was hypothesized that the Tmax reflects the time required for maximal occupancy of receptors, which would occur as equilibrium was approached. If so, the Tmax should be related to the affinity for the relevant receptor population. This hypothesis was tested using a series of nine antagonists having a 2500-fold range of Ki or Kd values for D2-like dopamine receptors. Rats self-administered cocaine at regular intervals and then were injected intravenously with a dose of antagonist, and the self-administration of cocaine was continued for 6 to 10 h. The level of cocaine at the time of every self-administration (satiety threshold) was calculated throughout the session. The satiety threshold was stable before the injection of antagonist and then increased approximately 3-fold over the baseline value at doses of antagonists selected to produce this approximately equivalent maximal magnitude of effect (maximum increase in the equiactive cocaine concentration, satiety threshold; Cmax). Despite the similar Cmax, the mean Tmax varied between 5 and 157 min across this series of antagonists. Furthermore, there was a strong and significant correlation between the in vivo Tmax values for each antagonist and the affinity for D2-like dopamine receptors measured in vitro. It is concluded that the cocaine self-administration paradigm offers a reliable and predictive bioassay for measuring the affinity of a competitive antagonist for D2-like dopamine receptors. PMID:21606176
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
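To make the EPE-minimization principle concrete, here is a minimal sketch (not the authors' pipeline) of choosing a ridge-regularization penalty by k-fold cross-validation; the data, fold count, and penalty grid are all illustrative assumptions:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_epe(X, y, lam, k=5, seed=0):
    # Estimate expected prediction error (EPE) by k-fold cross-validation.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[test] @ w - y[test]) ** 2))
    return np.mean(errs)

# Toy "item pool": many noisy predictors, few truly related to the criterion.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = X[:, 0] - 0.5 * X[:, 1] + 0.5 * rng.standard_normal(200)

# Pick the penalty that minimizes estimated EPE, not within-sample fit.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_epe(X, y, lam))
```

The key point mirrors the abstract: the penalty is chosen to minimize held-out error rather than to maximize the within-sample likelihood.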
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal ℓ^p-regularity for several time-stepping schemes for a fractional evolution model, which involves a fractional derivative of order α, 0 < α < 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. In the field of physics, on the other hand, quantum annealing (QA) has been proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, through numerical simulations, we illustrate how DQAEM works in MLE and show that it moderates the problem of local optima in EM.
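For reference, the plain EM baseline that DQAEM extends can be sketched for a two-component 1-D Gaussian mixture; this is generic textbook EM, not the paper's algorithm, and the initialization and data are illustrative:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    # Plain EM for a two-component 1-D Gaussian mixture.
    mu = np.array([x.min(), x.max()], dtype=float)  # crude deterministic init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component memberships).
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: parameter updates maximizing the expected log-likelihood.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
mu, var, pi = em_gmm_1d(x)
```

A poor initialization of `mu` can trap this loop in a local optimum; that sensitivity is the problem the annealing extension is designed to moderate.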
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses a learning automaton scheme to generate an action probability distribution based on his/her private information so as to maximize his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
Annotti, Lee A; Teglasi, Hedwig
2017-01-01
Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT), presents ill-defined response expectations (i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking). To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.
Leicht, Anthony; Crowther, Robert; Golledge, Jonathan
2015-05-18
This study examined the impact of regular supervised exercise on body fat, assessed via anthropometry, and eating patterns of peripheral arterial disease patients with intermittent claudication (IC). Body fat, eating patterns and walking ability were assessed in 11 healthy adults (Control) and age- and mass-matched IC patients undertaking usual care (n = 10; IC-Con) or supervised exercise (12-months; n = 10; IC-Ex). At entry, all groups exhibited similar body fat and eating patterns. Maximal walking ability was greatest for Control participants and similar for IC-Ex and IC-Con patients. Supervised exercise resulted in significantly greater improvements in maximal walking ability (IC-Ex 148%-170% vs. IC-Con 29%-52%) and smaller increases in body fat (IC-Ex -2.1%-1.4% vs. IC-Con 8.4%-10%). IC-Con patients exhibited significantly greater increases in body fat compared with Control at follow-up (8.4%-10% vs. -0.6%-1.4%). Eating patterns were similar for all groups at follow-up. The current study demonstrated that regular, supervised exercise significantly improved maximal walking ability and minimised increase in body fat amongst IC patients without changes in eating patterns. The study supports the use of supervised exercise to minimize cardiovascular risk amongst IC patients. Further studies are needed to examine the additional value of other lifestyle interventions such as diet modification.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR—viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce constant point-spread function, PSF; and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data.
The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
Maximizers in Lipschitz spacetimes are either timelike or null
NASA Astrophysics Data System (ADS)
Graf, Melanie; Ling, Eric
2018-04-01
We prove that causal maximizers in C^{0,1} spacetimes are either timelike or null. This question was posed in Sämann and Steinbauer (2017 arXiv:1710.10887) since bubbling regions in C^{0,α} spacetimes (α < 1) can produce causal maximizers that contain a segment which is timelike and a segment which is null, see Chruściel and Grant (2012 Class. Quantum Grav. 29 145001). While C^{0,1} spacetimes do not produce bubbling regions, the causal character of maximizers for spacetimes with regularity at least C^{0,1} but less than C^{1,1} was unknown until now. As an application we show that timelike geodesically complete spacetimes are C^{0,1}-inextendible.
[Measures to reduce lighting-related energy use and costs at hospital nursing stations].
Su, Chiu-Ching; Chen, Chen-Hui; Chen, Shu-Hwa; Ping, Tsui-Chu
2011-06-01
Hospitals have long been expected to deliver medical services in an environment that is comfortable and bright. This expectation keeps hospital energy demand stubbornly high and energy costs spiraling due to escalating utility fees. Hospitals must identify appropriate strategies to control electricity usage in order to control operating costs effectively. This paper proposes several electricity saving measures that both support government policies aimed at reducing global warming and help reduce energy consumption at the authors' hospital. The authors held educational seminars, established a website teaching energy saving methods, maximized facility and equipment use effectiveness (e.g., adjusting lamp placements, power switch and computer saving modes), posted signs promoting electricity saving, and established a regularized energy saving review mechanism. After implementation, average nursing staff energy saving knowledge had risen from 71.8% to 100% and total nursing station electricity costs fell from NT$16,456 to NT$10,208 per month, representing an effective monthly savings of 37.9% (NT$6,248). This project demonstrated the ability of a program designed to slightly modify nursing staff behavior to achieve effective and meaningful results in reducing overall electricity use.
Variable Scheduling to Mitigate Channel Losses in Energy-Efficient Body Area Networks
Tselishchev, Yuriy; Boulis, Athanassios; Libman, Lavy
2012-01-01
We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore variable TDMA scheduling techniques that allow the order of transmissions within each TDMA round to be decided on the fly, rather than being fixed in advance. Using a simple Markov model of the wireless links, we devise a number of scheduling algorithms that can be performed by the hub, which aim to maximize the expected number of successful transmissions in a TDMA round, and thereby significantly reduce transmission losses as compared with a static TDMA schedule. Importantly, these algorithms do not require a priori knowledge of the statistical properties of the wireless channels, and the reliability improvement is achieved entirely via shuffling the order of transmissions among devices, and does not involve any additional energy consumption (e.g., retransmissions). We evaluate these algorithms directly on an experimental set of traces obtained from devices strapped to human subjects performing regular daily activities, and confirm that the benefits of the proposed variable scheduling algorithms extend to this practical setup as well. PMID:23202183
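The flavor of such variable scheduling can be sketched with a two-state (Gilbert-Elliott) channel belief per node and a greedy "best channel first" rule. This is a simplified illustration under assumed parameters, not one of the paper's algorithms; the node names and all probabilities are hypothetical:

```python
# Gilbert-Elliott channel: each link is Good or Bad, with per-slot
# transition probabilities (values assumed for illustration).
P_GB, P_BG = 0.1, 0.3            # Good->Bad, Bad->Good
P_SUCC_GOOD, P_SUCC_BAD = 0.95, 0.2

def step_belief(p_good):
    # One-slot evolution of the belief that a link is in the Good state.
    return p_good * (1 - P_GB) + (1 - p_good) * P_BG

def greedy_round(beliefs):
    """Schedule one TDMA round: in each slot, transmit the pending node
    whose current belief gives the highest success probability; deferred
    nodes get extra slots in which a bad channel may recover."""
    pending = dict(beliefs)
    order, expected = [], 0.0
    while pending:
        node = max(pending, key=lambda n: pending[n])
        p = pending.pop(node)
        expected += p * P_SUCC_GOOD + (1 - p) * P_SUCC_BAD
        order.append(node)
        # Remaining nodes' channels evolve for one more slot.
        pending = {n: step_belief(q) for n, q in pending.items()}
    return order, expected

beliefs = {"ecg": 0.9, "accel": 0.2, "spo2": 0.6}
order, exp_succ = greedy_round(beliefs)
```

Deferring a node whose channel is believed to be bad costs no energy; it simply gives that channel time to recover before the node's turn, which is the mechanism behind the reliability gain over a static schedule.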
Crowther, Robert G; Leicht, Anthony S; Spinks, Warwick L; Sangla, Kunwarjit; Quigley, Frank; Golledge, Jonathan
2012-01-01
The purpose of this study was to examine the effects of a 6-month exercise program on submaximal walking economy in individuals with peripheral arterial disease and intermittent claudication (PAD-IC). Participants (n = 16) were randomly allocated to either a control PAD-IC group (CPAD-IC, n = 6) which received standard medical therapy, or a treatment PAD-IC group (TPAD-IC; n = 10) which took part in a supervised exercise program. During a graded treadmill test, physiological responses, including oxygen consumption, were assessed to calculate walking economy during submaximal and maximal walking performance. Differences between groups at baseline and post-intervention were analyzed via Kruskal-Wallis tests. At baseline, CPAD-IC and TPAD-IC groups demonstrated similar walking performance and physiological responses. Postintervention, TPAD-IC patients demonstrated significantly lower oxygen consumption during the graded exercise test, and greater maximal walking performance compared to CPAD-IC. These preliminary results indicate that 6 months of regular exercise improves both submaximal walking economy and maximal walking performance, without significant changes in maximal walking economy. Enhanced walking economy may contribute to physiological efficiency, which in turn may improve walking performance as demonstrated by PAD-IC patients following regular exercise programs.
Banerjee, Arindam; Ghosh, Joydeep
2004-05-01
Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index Terms-Balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
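A minimal sketch of the base spkmeans idea described above (generic, with toy directional data; not the paper's frequency-sensitive variants):

```python
import numpy as np

def spkmeans(X, k, n_iter=50):
    # Spherical k-means: inputs and centroids are unit-normalized, and
    # similarity is the dot (cosine) product.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Simple deterministic farthest-point seeding (an illustrative choice).
    C = [X[0]]
    for _ in range(1, k):
        sims = np.max(X @ np.array(C).T, axis=1)
        C.append(X[np.argmin(sims)])
    C = np.array(C)
    for _ in range(n_iter):
        labels = np.argmax(X @ C.T, axis=1)      # assign to most similar center
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)     # re-normalize the centroid
    return labels, C

# Toy data: two clusters of directions in 3-D.
rng = np.random.default_rng(3)
a = rng.normal([5, 0, 0], 1.0, (100, 3))
b = rng.normal([0, 5, 0], 1.0, (100, 3))
X = np.vstack([a, b])
labels, C = spkmeans(X, 2)
```

Because both the inputs and the centers are unit vectors, the argmax dot-product assignment is exactly cosine-similarity assignment, which is what makes the method suitable for normalized text vectors.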
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
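A toy version of the expected-profit calculation can be sketched as follows; the power model is a standard normal approximation for a two-arm comparison of means, and every economic figure is a hypothetical assumption, not a value from the paper:

```python
from math import sqrt
from statistics import NormalDist

# Pick the Phase III sample size per arm that maximizes
# expected profit = P(trial succeeds) * market value - trial cost.
Z_ALPHA = NormalDist().inv_cdf(0.975)   # two-sided 5% significance

def power(n_per_arm, effect, sd):
    # Normal-approximation power for a two-arm comparison of means.
    se = sd * sqrt(2.0 / n_per_arm)
    return 1 - NormalDist().cdf(Z_ALPHA - effect / se)

def expected_profit(n_per_arm, effect=0.3, sd=1.0,
                    market_value=500e6, cost_per_patient=50e3):
    return power(n_per_arm, effect, sd) * market_value \
           - 2 * n_per_arm * cost_per_patient

best_n = max(range(50, 2001, 10), key=expected_profit)
```

Beyond the optimum, additional patients buy almost no extra power, so per-patient costs dominate and expected profit declines; this is the basic trade-off that the paper's Bayesian/Neyman-Pearson hybrid formalizes.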
On split regular Hom-Lie superalgebras
NASA Astrophysics Data System (ADS)
Albuquerque, Helena; Barreiro, Elisabete; Calderón, A. J.; Sánchez, José M.
2018-06-01
We introduce the class of split regular Hom-Lie superalgebras as the natural extension of the classes of split Hom-Lie algebras and Lie superalgebras, and study its structure by showing that an arbitrary split regular Hom-Lie superalgebra L is of the form L = U + ∑_j I_j, with U a linear subspace of a maximal abelian graded subalgebra H and each I_j a well-described (split) ideal of L satisfying [I_j, I_k] = 0 for j ≠ k. Under certain conditions, the simplicity of L is characterized and it is shown that L is the direct sum of the family of its simple ideals.
Forecasting continuously increasing life expectancy: what implications?
Le Bourg, Eric
2012-04-01
It has been proposed that life expectancy could linearly increase in the next decades and that median longevity of the youngest birth cohorts could reach 105 years or more. These forecasts have been criticized but it seems that their implications for future maximal lifespan (i.e. the lifespan of the last survivors) have not been considered. These implications make these forecasts untenable and it is less risky to hypothesize that life expectancy and maximal lifespan will reach an asymptotic limit in some decades from now. Copyright © 2012 Elsevier B.V. All rights reserved.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, packet classification based on the hierarchical trie has become an important research branch because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, the paper formalizes the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, it builds a hierarchical trie from the results of the expectation-maximization clustering. Finally, it conducts both simulation and real-environment experiments to compare the performance of the algorithm with other typical algorithms, and analyzes the results. The hierarchical trie structure in the algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus
2010-04-15
With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. 
Finally, the correlation-maximizing methods are shown to yield results that are more biologically interpretable than those obtained with a covariance-maximizing method, and to provide different insight than when each variable set is studied separately using PCA. We conclude that regularized dual CCA and PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and that they can be implemented efficiently even when the number of variables is very large.
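A minimal sketch of the dual trick described above, assuming linear kernels: the regularized CCA is solved entirely in sample space via the n x n Gram matrices, so the cost depends on the number of samples rather than the number of variables. The function name, the fixed `reg` value, and the eigenproblem formulation are illustrative choices, not the paper's implementation (which also selects `reg` by cross-validation).

```python
import numpy as np

def regularized_dual_cca(X, Y, reg=1e-2, n_components=1):
    """Regularized CCA in its dual form (linear kernels).

    Works with the n x n Gram matrices, so the cost scales with the
    number of samples n rather than the number of variables."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Kx = Xc @ Xc.T                      # n x n Gram matrices
    Ky = Yc @ Yc.T
    I = np.eye(n)
    # Eigenproblem whose leading eigenvalues are squared canonical correlations
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)[:n_components]
    rho = np.sqrt(np.clip(evals.real[order], 0.0, 1.0))
    alpha = evecs.real[:, order]        # dual coefficients for the X view
    return rho, alpha
```

With two strongly correlated high-dimensional views, the leading canonical correlation should be close to one even though the number of variables far exceeds the number of samples.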
Highly Symmetric and Congruently Tiled Meshes for Shells and Domes
Rasheed, Muhibur; Bajaj, Chandrajit
2016-01-01
We describe the generation of all possible shell and dome shapes that can be uniquely meshed (tiled) using a single type of mesh face (tile), following a single meshing (tiling) rule that governs the mesh (tile) arrangement with maximal vertex, edge and face symmetries. Such tiling arrangements, or congruently tiled meshed shapes, are frequently found in chemical forms (fullerenes or Bucky balls, crystals, quasi-crystals, virus nano shells or capsids) and synthetic shapes (cages, sports domes, modern architectural facades). Congruently tiled meshes are both aesthetic and complete, as they support maximal mesh symmetries with minimal complexity and possess simple generation rules. Here, we generate congruent tilings and meshed shape layouts that satisfy these optimality conditions. Further, the congruent meshes are uniquely mappable to an almost regular 3D polyhedron (or its dual polyhedron), which exhibits face-transitive (and edge-transitive) congruency with at most two types of vertices (each type transitive to the other). The family of all such congruently meshed polyhedra creates a new class of meshed shapes, beyond the well-studied regular, semi-regular and quasi-regular classes and their duals (Platonic, Catalan and Johnson). While our new mesh class is infinite, we prove that there exists a unique mesh parametrization in which each member of the class can be represented by two integer lattice variables, and is moreover efficiently constructible. PMID:27563368
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
Power-to-load balancing for asymmetric heave wave energy converters with nonideal power take-off
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M.; Madhi, Farshad; Yeung, Ronald W.
2017-12-11
The aim of this study is to maximize the power-to-load ratio for asymmetric heave wave energy converters. Linear hydrodynamic theory was used to calculate bounds of the expected time-averaged power (TAP) and corresponding surge-restraining force, pitch-restraining torque, and power take-off (PTO) control force with the assumption of sinusoidal displacement. This paper formulates an optimal control problem to handle an objective function with competing terms in an attempt to maximize power capture while minimizing structural and actuator loads in regular and irregular waves. Penalty weights are placed on the surge-restraining force, pitch-restraining torque, and PTO actuation force, thereby allowing the control focus to concentrate on either power absorption or load mitigation. The penalty weights are used to control peak structural and actuator loads that were found to curb the additional losses in power absorption associated with a nonideal PTO. Thus, in achieving these goals, a per-unit gain in TAP would not lead to a greater per-unit demand in structural strength, hence yielding a favorable benefit-to-cost ratio. Demonstrative results for 'The Berkeley Wedge' in the form of output TAP, reactive TAP needed to drive WEC motion, and the amplitudes of the surge-restraining force, pitch-restraining torque, and PTO control force are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M.; Madhi, Farshad; Yeung, Ronald W.
2016-06-24
The aim of this paper is to maximize the power-to-load ratio of the Berkeley Wedge: a one-degree-of-freedom, asymmetrical, energy-capturing, floating breakwater of high performance that is relatively free of viscosity effects. Linear hydrodynamic theory was used to calculate bounds on the expected time-averaged power (TAP) and corresponding surge restraining force, pitch restraining torque, and power take-off (PTO) control force when assuming that the heave motion of the wave energy converter remains sinusoidal. This particular device was documented to be an almost-perfect absorber if one-degree-of-freedom motion is maintained. The success of such or similar future wave energy converter technologies would require the development of control strategies that can adapt device performance to maximize energy generation in operational conditions while mitigating hydrodynamic loads in extreme waves to reduce the structural mass and overall cost. This paper formulates the optimal control problem to incorporate metrics that provide a measure of the surge restraining force, pitch restraining torque, and PTO control force. The optimizer must now handle an objective function with competing terms in an attempt to maximize power capture while minimizing structural and actuator loads. A penalty weight is placed on the surge restraining force, pitch restraining torque, and PTO actuation force, thereby allowing the control focus to be placed either on power absorption or load mitigation. Thus, in achieving these goals, a per-unit gain in TAP would not lead to a greater per-unit demand in structural strength, hence yielding a favorable benefit-to-cost ratio. Demonstrative results in the form of TAP, reactive TAP, and the amplitudes of the surge restraining force, pitch restraining torque, and PTO control force are shown for the Berkeley Wedge example.
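As a minimal numerical companion to the TAP calculations described in these abstracts, the sketch below computes the time-averaged power of a linear one-degree-of-freedom heave absorber with a pure PTO damper in a regular wave, and recovers the classical optimum B_pto* = sqrt(B^2 + X^2), where X is the mechanical reactance. Every parameter value here is an illustrative assumption, not taken from the papers.

```python
import numpy as np

# Illustrative heave-absorber parameters (assumed, not from the papers)
m, A = 1000.0, 500.0      # body mass and added mass [kg]
B = 300.0                 # radiation damping [N s/m]
K = 2.0e4                 # hydrostatic stiffness [N/m]
Fe = 5.0e3                # wave excitation force amplitude [N]
omega = 4.0               # wave frequency [rad/s]

def tap(b_pto):
    """Time-averaged power absorbed by a linear PTO damper b_pto."""
    Z = B + b_pto + 1j * (omega * (m + A) - K / omega)  # mechanical impedance
    v = Fe / Z                                          # velocity amplitude
    return 0.5 * b_pto * np.abs(v) ** 2

# Sweep the PTO damping and locate the power-maximizing value
b_grid = np.linspace(1.0, 5000.0, 2000)
powers = tap(b_grid)
b_best = b_grid[np.argmax(powers)]

# Classical closed-form optimum for a pure damper: B_pto* = sqrt(B^2 + X^2)
X = omega * (m + A) - K / omega
b_theory = np.hypot(B, X)
```

The grid search and the closed-form optimum should agree; adding load penalties, as the papers do, would shift this optimum toward lower damping forces.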
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
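In the spirit of the noise benefit described above, here is a sketch of k-means with annealed zero-mean noise added to the centroid update. This is an illustrative variant, not the authors' exact algorithm; the decay schedule and noise scale are assumptions.

```python
import numpy as np

def noisy_kmeans(X, k, n_iter=50, noise0=0.5, decay=0.9, seed=0):
    """k-means with annealed zero-mean noise added to the centroid update.

    The noise scale decays each iteration so the algorithm still converges."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    scale = noise0
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every point
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: cluster mean plus additive annealed noise
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0) + rng.normal(0, scale, X.shape[1])
        scale *= decay
    return centroids, labels
```

On well-separated clusters the perturbed updates still settle on the cluster means once the noise has decayed.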
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations: the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared with a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times over PCA and parallel PCA, respectively.
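The EM route to PCA mentioned above can be sketched in a few lines (a serial, Roweis-style EM-PCA; the paper's parallel architecture is omitted). The alternating least-squares steps find the top-k principal subspace without ever forming the covariance matrix or its eigendecomposition. Function and variable names are illustrative assumptions.

```python
import numpy as np

def em_pca(Y, k, n_iter=100, seed=0):
    """Roweis-style EM for PCA: basis of the top-k principal subspace
    without an explicit covariance matrix or eigendecomposition."""
    rng = np.random.default_rng(seed)
    D = (Y - Y.mean(axis=0)).T             # d x n centered data matrix
    C = rng.normal(size=(D.shape[0], k))   # d x k loading matrix, random init
    for _ in range(n_iter):
        # E-step: latent coordinates given the current loadings C
        X = np.linalg.solve(C.T @ C, C.T @ D)        # k x n
        # M-step: loadings given the latent coordinates
        C = D @ X.T @ np.linalg.inv(X @ X.T)         # d x k
    # Orthonormalize; the columns span the principal subspace
    Q, _ = np.linalg.qr(C)
    return Q
```

Each iteration costs O(ndk) rather than the O(nd^2 + d^3) of a covariance eigendecomposition, which is the complexity argument the abstract relies on.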
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
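For reference, the Richardson-Lucy (expectation-maximization) iteration that the abstract reports failing in the blind setting is, in its standard non-blind form, a short multiplicative update. The 1-D sketch below is an illustrative toy, with an assumed flat initialization and a small floor to avoid division by zero.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """Non-blind Richardson-Lucy (EM) deconvolution for non-negative 1-D data."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                 # adjoint of convolution
    x = np.full_like(y, y.mean())        # flat non-negative starting estimate
    for _ in range(n_iter):
        blur = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blur, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

The multiplicative form automatically preserves non-negativity, which is the positivity constraint the EM interpretation encodes.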
Numerical simulation of coherent resonance in a model network of Rulkov neurons
NASA Astrophysics Data System (ADS)
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, which then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
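The building block of such a network is the two-variable Rulkov map; a single noisy neuron can be iterated as below. The parameter values and the additive-noise placement are illustrative assumptions, and the network coupling and coherence measure of the paper are omitted.

```python
import numpy as np

def rulkov_trajectory(n_steps, alpha=4.1, mu=0.001, sigma=-1.2,
                      noise=0.01, seed=0):
    """Iterate a single Rulkov map neuron with additive noise
    on the fast variable x; y is the slow recovery variable."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = -1.0, -2.9
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n] + noise * rng.normal()
        y[n + 1] = y[n] - mu * (x[n] - sigma)
    return x, y
```

Sweeping the `noise` amplitude and measuring spike-train regularity would reproduce the coherence-resonance curve the abstract describes.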
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
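The cubic phase mask's defining property, a PSF that stays nearly constant under defocus, can be illustrated with a small Fourier-optics sketch. All quantities are in normalized pupil units, and the grid size, cubic strength, and defocus value are assumptions chosen for illustration.

```python
import numpy as np

N = 128
c = np.linspace(-2.0, 2.0, N)          # padded grid; pupil occupies |r| <= 1
u, v = np.meshgrid(c, c)
aperture = (u**2 + v**2) <= 1.0        # circular pupil

def psf(alpha, defocus):
    """Incoherent PSF of a pupil with cubic phase alpha*(u^3 + v^3)
    and defocus phase defocus*(u^2 + v^2), both in radians."""
    phase = alpha * (u**3 + v**3) + defocus * (u**2 + v**2)
    field = aperture * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()

def corr(a, b):
    """Normalized correlation between two PSFs."""
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# PSF stability under defocus: plain aperture vs. cubic phase mask
plain = corr(psf(0.0, 0.0), psf(0.0, 6.0))
cubic = corr(psf(30.0, 0.0), psf(30.0, 6.0))
```

The cubic-mask PSF should correlate far better with its in-focus counterpart than the plain aperture does, which is what makes a single deconvolution PSF usable over an extended depth range.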
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
Instationary Generalized Stokes Equations in Partially Periodic Domains
NASA Astrophysics Data System (ADS)
Sauer, Jonas
2018-06-01
We consider an instationary generalized Stokes system with nonhomogeneous divergence data under a periodic condition in only some directions. The problem is set in the whole space, the half space or in (after an identification of the periodic directions with a torus) bounded domains with sufficiently regular boundary. We show unique solvability for all times in Muckenhoupt weighted Lebesgue spaces. The divergence condition is dealt with by analyzing the associated reduced Stokes system and in particular by showing maximal regularity of the partially periodic reduced Stokes operator.
Battaglia, Francesco P.; Pennartz, Cyriel M. A.
2011-01-01
After acquisition, memories undergo a process of consolidation, making them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which take part in the initial encoding of memory, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between its main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure, analogous to the inside–outside algorithm for stochastic context-free grammars. We propose that a Monte-Carlo sampling version of this algorithm can be mapped on the dynamics of “sleep replay” of previously acquired information in the hippocampus and neocortex. We propose that the model can reproduce several properties of semantic memory such as decontextualization, top-down processing, and creation of schemata. PMID:21887143
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
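The maximum likelihood expectation maximization update used for both the regular-grid and mesh reconstructions above takes the same multiplicative form regardless of the basis; a generic matrix-form sketch is below. The toy system matrix and noise-free data are assumptions for illustration, not the paper's projector.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """MLEM for Poisson data y ~ Poisson(A @ x), with x >= 0 preserved
    automatically by the multiplicative update."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity: A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

With noise-free data and a full-column-rank system matrix, the iterates drive the projection residual toward zero while staying non-negative.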
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
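The clustering step HTEMC relies on, EM over rules mapped into a two-dimensional space, can be sketched as a standard Gaussian-mixture EM. For brevity this uses spherical covariances; it is an illustration of the technique, not the authors' implementation.

```python
import numpy as np

def em_gmm(X, k, n_iter=200, seed=0):
    """EM for a spherical-covariance Gaussian mixture (any dimension)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)].astype(float)
    var = np.full(k, X.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log responsibilities, stabilized per sample
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        log_p = np.log(w) - 0.5 * d * np.log(2.0 * np.pi * var) - d2 / (2.0 * var)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights, means, then variances with the updated means
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = (r * d2).sum(axis=0) / (d * nk) + 1e-9
    return w, mu, var
```

Each resulting cluster of rules would then seed one subtree of the hierarchical trie.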
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
Michael R. Vanderberg; Kevin Boston; John Bailey
2011-01-01
Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...
Exercise Adherence. ERIC Digest.
ERIC Educational Resources Information Center
Sullivan, Pat
This digest discusses exercise adherence, noting its vital role in maximizing the benefits associated with physical activity. Information is presented on the following: (1) factors that influence adherence to self-monitored programs of regular exercise (childhood eating habits, and psychological, physical, social, and situational factors); (2)…
Instructional Variables that Make a Difference: Attention to Task and Beyond.
ERIC Educational Resources Information Center
Rieth, Herbert J.; And Others
1981-01-01
Three procedures for increasing disabled students' academic learning time (ALT) by maximizing allocation time, engagement time, and success rate are discussed, and a direct instructional model for enhancing ALT in both regular and special education environments is described. (CL)
Ouerghi, Nejmeddine; Khammassi, Marwa; Boukorraa, Sami; Feki, Moncef; Kaabachi, Naziha; Bouassida, Anissa
2014-01-01
Background: Data regarding the effect of training on plasma lipids are controversial. Most studies have addressed continuous or long intermittent training programs. The present study evaluated the effect of short-short high-intensity intermittent training (HIIT) on aerobic capacity and plasma lipids in soccer players. Methods: The study included 24 male subjects aged 21–26 years, divided into three groups: experimental group 1 (EG1, n=8) comprising soccer players who performed regular short-short HIIT twice a week for 12 weeks in addition to their training; experimental group 2 (EG2, n=8) comprising soccer players who followed a regular football training program; and a control group (CG, n=8) comprising untrained subjects who did not practice regular physical activity. Maximal aerobic velocity and maximal oxygen uptake along with plasma lipids were measured before and after 6 weeks and 12 weeks of the respective training program. Results: Compared with basal values, maximal oxygen uptake had significantly increased in EG1 (from 53.3±4.0 mL/min/kg to 54.8±3.0 mL/min/kg at 6 weeks [P<0.05] and to 57.0±3.2 mL/min/kg at 12 weeks [P<0.001]). Maximal oxygen uptake increased only after 12 weeks in EG2 (from 52.8±2.7 mL/min/kg to 54.2±2.6 mL/min/kg [P<0.05]), but remained unchanged in CG. After 12 weeks of training, maximal oxygen uptake was significantly higher in EG1 than in EG2 (P<0.05). During training, no significant changes in plasma lipids occurred. However, after 12 weeks, total and low-density lipoprotein cholesterol levels had decreased (by about 2%) in EG1 but increased in CG. High-density lipoprotein cholesterol levels increased in EG1 and EG2, but decreased in CG. Plasma triglycerides decreased by 8% in EG1 and increased by about 4% in CG. Conclusion: Twelve weeks of short-short HIIT improves aerobic capacity. Although changes in the lipid profile were not significant after this training program, they may have a beneficial impact on health. PMID:25378960
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task
Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.
2012-01-01
We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád
2018-03-01
In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator Ψ̄Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators located at neighboring sites of the lattice. The operator Ψ̄Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can also be computed from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit realized high performance. A statistical reconstruction method was thus applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
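The ML-EM update underlying the reconstructions above is standard; the following is a minimal dense-matrix sketch (generic nonnegative system matrix A and projection data y, not the authors' CT geometry or implementation):

```python
import numpy as np

def mlem(A, y, n_iters=200):
    """Basic ML-EM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

    A : (m, n) nonnegative system matrix; y : (m,) measured projections.
    The update preserves nonnegativity and increases the Poisson likelihood.
    """
    x = np.ones(A.shape[1])            # flat nonnegative starting image
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                   # forward projection
        proj[proj == 0] = 1e-12        # guard against division by zero
        x = x / sens * (A.T @ (y / proj))
    return x

# Tiny toy problem: recover x from noiseless, consistent projections.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

On consistent noiseless data the iterates approach the true nonnegative solution; in practice the iteration is stopped early or regularized, as the surrounding abstracts discuss.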
Fast GPU-based computation of spatial multigrid multiframe LMEM for PET.
Nassiri, Moulay Ali; Carrier, Jean-François; Després, Philippe
2015-09-01
Significant efforts were invested during the last decade to accelerate PET list-mode reconstructions, notably with GPU devices. However, the computation time per event is still relatively long, and the list-mode efficiency on the GPU is well below the histogram-mode efficiency. Since list-mode data are not arranged in any regular pattern, costly accesses to the GPU global memory can hardly be optimized and geometrical symmetries cannot be used. To overcome the obstacles that limit the acceleration of list-mode reconstruction on the GPU, a multigrid and multiframe approach to an expectation-maximization algorithm was developed. The reconstruction process is started during data acquisition, and calculations are executed concurrently on the GPU and the CPU, while the system matrix is computed on-the-fly. A new convergence criterion also was introduced, which is computationally more efficient on the GPU. The implementation was tested on a Tesla C2050 GPU device for a Gemini GXL PET system geometry. The results show that the proposed algorithm (multigrid and multiframe list-mode expectation-maximization, MGMF-LMEM) converges to the same solution as the LMEM algorithm more than three times faster. The execution time of the MGMF-LMEM algorithm was 1.1 s per million events on the Tesla C2050 hardware used, for a reconstructed space of 188 x 188 x 57 voxels of 2 x 2 x 3.15 mm3. For 17- and 22-mm simulated hot lesions, the MGMF-LMEM algorithm led on the first iteration to contrast recovery coefficients (CRC) of more than 75% of the maximum CRC while achieving a minimum in the relative mean square error. Therefore, the MGMF-LMEM algorithm can be used as a one-pass method to perform real-time reconstructions for low-count acquisitions, as in list-mode gated studies. The computation time for one iteration and 60 million events was approximately 66 s.
Convergence of damped inertial dynamics governed by regularized maximally monotone operators
NASA Astrophysics Data System (ADS)
Attouch, Hedy; Cabot, Alexandre
2018-06-01
In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ(t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ(t). A suitable tuning of the parameters γ(t) and λ(t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3], and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ(t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve, the results already present in the literature.
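The abstract describes second-order dynamics driven by a time-dependent Yosida regularization; as a sketch of the setup (notation assumed from this line of work, not quoted from the paper), the system studied takes the form

```latex
\ddot{x}(t) + \gamma(t)\,\dot{x}(t) + A_{\lambda(t)}\bigl(x(t)\bigr) = 0,
\qquad
A_{\lambda} \;=\; \frac{1}{\lambda}\Bigl(I - (I + \lambda A)^{-1}\Bigr),
```

where $A$ is the maximally monotone operator, $A_{\lambda}$ its Yosida regularization (single-valued and $\tfrac{1}{\lambda}$-Lipschitz), $\gamma(t)$ the viscous damping coefficient, and $\lambda(t)$ the time-varying regularization index whose tuning against $\gamma(t)$ drives the convergence results.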
Effect of inhibitory firing pattern on coherence resonance in random neural networks
NASA Astrophysics Data System (ADS)
Yu, Haitao; Zhang, Lianghao; Guo, Xinmeng; Wang, Jiang; Cao, Yibin; Liu, Jing
2018-01-01
The effect of inhibitory firing patterns on coherence resonance (CR) in random neuronal networks is systematically studied. Spiking and bursting are the two main types of firing pattern considered in this work. Numerical results show that, irrespective of the inhibitory firing pattern, the regularity of the network is maximized by an optimal intensity of external noise, indicating the occurrence of coherence resonance. Moreover, the firing pattern of inhibitory neurons indeed has a significant influence on coherence resonance, but the efficacy is determined by network properties. In networks with strong coupling strength but weak inhibition, bursting neurons largely increase the amplitude of the resonance, while they can decrease the noise intensity that induces coherence resonance in neural systems with strong inhibition. Different temporal windows of inhibition induced by different inhibitory neurons may account for the above observations. The network structure also plays a constructive role in coherence resonance. There exists an optimal network topology that maximizes the regularity of the neural system.
Callréus, M; McGuigan, F; Ringsberg, K; Akesson, K
2012-10-01
Recreational physical activity in 25-year-old women in Sweden increases bone mineral density (BMD) in the trochanter by 5.5% when combining regularity and impact. Jogging and spinning were especially beneficial for hip BMD (6.4-8.5%). Women who enjoyed physical education in school maintained their higher activity level at age 25. The aims of this study were to evaluate the effects of recreational exercise on BMD and describe how exercise patterns change with time in a normal population of young adult women. In a population-based study of 1,061 women, age 25 (±0.2), BMD was measured at total body (TB-BMD), femoral neck (FN-BMD), trochanter (TR-BMD), and spine (LS-BMD). Self-reported physical activity status was assessed by questionnaire. Regularity of exercise was expressed as recreational activity level (RAL) and impact load as peak strain score (PSS). A permutation (COMB-RP) was used to evaluate combined endurance and impacts on bone mass. More than half of the women reported exercising on a regular basis and the most common activities were running, strength training, aerobics, and spinning. Seventy percent participated in at least one activity during the year. Women with high RAL or PSS had higher BMD in the hip (2.6-3.5%) and spine (1.5-2.1%), with the greatest differences resulting from PSS (p < 0.001-0.02). Combined regularity and impact (high-COMB-RP) conferred the greatest gains in BMD (FN 4.7%, TR 5.5%, LS 3.1%; p < 0.001) despite concomitant lower body weight. Jogging and spinning were particularly beneficial for hip BMD (+6.4-8.5%). Women with high-COMB-RP scores enjoyed physical education in school more and maintained higher activity levels throughout compared to those with low scores. Self-reported recreational levels of physical activity positively influence BMD in young adult women but to maximize BMD gains, regular, high-impact exercise is required. 
Enjoyment of exercise contributes to regularity of exercising which has short- and long-term implications for bone health.
Coverage-maximization in networks under resource constraints.
Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy
2010-06-01
Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to this network challenge is studied here for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids is shown to perform best in terms of the product metric of speed and efficiency.
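A toy illustration of the proliferating-random-walk idea follows; the grid size, step budget, and constant proliferation rate are hypothetical, and this is not the paper's temporally modulated schedule:

```python
import random

def cover_grid(L=20, n_steps=400, prolif_rate=0.02, seed=0):
    """Toy proliferating random walk on an L x L periodic grid.

    Each message packet performs a random walk; at every step it may
    spawn a copy with probability prolif_rate.  Returns the fraction of
    sites visited and the bandwidth consumed (total walker-steps).
    """
    rng = random.Random(seed)
    walkers = [(0, 0)]
    visited = {(0, 0)}
    bandwidth = 0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_steps):
        new_walkers = []
        for (x, y) in walkers:
            dx, dy = rng.choice(moves)
            pos = ((x + dx) % L, (y + dy) % L)
            visited.add(pos)
            bandwidth += 1                     # one unit of bandwidth per hop
            new_walkers.append(pos)
            if rng.random() < prolif_rate:     # packet proliferates
                new_walkers.append(pos)
        walkers = new_walkers
    return len(visited) / (L * L), bandwidth

coverage, used = cover_grid()
```

Varying prolif_rate trades bandwidth for coverage speed, which is the tension the paper's modulated proliferation rate is designed to optimize.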
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
Husak, Jerry F; Fox, Stanley F
2006-09-01
To understand how selection acts on performance capacity, the ecological role of the performance trait being measured must be determined. Knowing if and when an animal uses maximal performance capacity may give insight into what specific selective pressures may be acting on performance, because individuals are expected to use close to maximal capacity only in contexts important to survival or reproductive success. Furthermore, if an ecological context is important, poor performers are expected to compensate behaviorally. To understand the relative roles of natural and sexual selection on maximal sprint speed capacity we measured maximal sprint speed of collared lizards (Crotaphytus collaris) in the laboratory and field-realized sprint speed for the same individuals in three different contexts (foraging, escaping a predator, and responding to a rival intruder). Females used closer to maximal speed while escaping predators than in the other contexts. Adult males, on the other hand, used closer to maximal speed while responding to an unfamiliar male intruder tethered within their territory. Sprint speeds during foraging attempts were far below maximal capacity for all lizards. Yearlings appeared to compensate for having lower absolute maximal capacity by using a greater percentage of their maximal capacity while foraging and escaping predators than did adults of either sex. We also found evidence for compensation within age and sex classes, where slower individuals used a greater percentage of their maximal capacity than faster individuals. However, this was true only while foraging and escaping predators and not while responding to a rival. Collared lizards appeared to choose microhabitats near refugia such that maximal speed was not necessary to escape predators. 
Although natural selection for predator avoidance cannot be ruled out as a selective force acting on locomotor performance in collared lizards, intrasexual selection for territory maintenance may be more important for territorial males.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
Enhancing Social Work Education through Team-Based Learning
ERIC Educational Resources Information Center
Gillespie, Judy
2012-01-01
Group learning strategies are used extensively in social work education, despite the challenges and negative outcomes regularly experienced by students and faculty. Building on principles of cooperative learning, team-based learning offers a more structured approach that maximizes the benefits of cooperative learning while also offering…
Aerobic exercise and respiratory muscle strength in patients with cystic fibrosis.
Dassios, Theodore; Katelari, Anna; Doudounakis, Stavros; Dimitriou, Gabriel
2013-05-01
The beneficial role of exercise in maintaining health in patients with cystic fibrosis (CF) is well described. Few data exist on the effect of exercise on respiratory muscle function in patients with CF. Our objective was to compare respiratory muscle function indices in CF patients who exercise regularly with those who do not. This cross-sectional study assessed nutrition, pulmonary function and respiratory muscle function in 37 CF patients who undertook regular aerobic exercise and in an age- and gender-matched control group of 44 CF patients who did not undertake regular exercise. Respiratory muscle function in CF was assessed by maximal inspiratory pressure (Pimax), maximal expiratory pressure (Pemax) and the pressure-time index of the respiratory muscles (PTImus). Median Pimax and Pemax were significantly higher in the exercise group than in the control group (92 vs. 63 cm H2O and 94 vs. 64 cm H2O, respectively). PTImus was significantly lower in the exercise group than in the control group (0.089 vs. 0.121). Upper arm muscle area (UAMA) and mid-arm muscle circumference were significantly increased in the exercise group compared to the control group (2608 vs. 2178 mm2 and 23 vs. 21 cm, respectively). UAMA was significantly related to Pimax in the exercising group. These results suggest that CF patients who undertake regular aerobic exercise maintain higher indices of respiratory muscle strength and lower PTImus values, while the increased UAMA values in exercising patients highlight the importance of muscular competence in respiratory muscle function in this population.
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. 
Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system.
Confronting Diversity in the Community College Classroom: Six Maxims for Good Teaching.
ERIC Educational Resources Information Center
Gillett-Karam, Rosemary
1992-01-01
Emphasizes the leadership role of community college faculty in developing critical teaching strategies focusing attention on the needs of women and minorities. Describes six maxims of teaching excellence: engaging students' desire to learn, increasing opportunities, eliminating obstacles, empowering students through high expectations, offering…
Pelham, William E; Hoza, Betsy; Pillow, David R; Gnagy, Elizabeth M; Kipp, Heidi L; Greiner, Andrew R; Waschbusch, Daniel A; Trane, Sarah T; Greenhouse, Joel; Wolfson, Lara; Fitzpatrick, Erin
2002-04-01
Pharmacological and expectancy effects of 0.3 mg/kg methylphenidate on the behavior and attributions of boys with attention-deficit/hyperactivity disorder were evaluated. In a within-subject, balanced-placebo design, 136 boys received 4 medication-expectancy conditions. Attributions for success and failure on a daily report card were gathered. Assessments took place within the setting of a summer treatment program and were repeated in boys' regular classrooms. Expectancy did not affect the boys' behavior; only active medication improved their behavior. Boys attributed their success to their effort and ability and attributed failure to task difficulty and the pill, regardless of medication and expectancy. Results were generally equivalent across the two settings; where there were differences, beneficial effects of medication were more apparent in the school setting. The findings were unaffected by individual-difference factors.
Induced venous pooling and cardiorespiratory responses to exercise after bed rest
NASA Technical Reports Server (NTRS)
Convertino, V. A.; Sandler, H.; Webb, P.; Annis, J. F.
1982-01-01
Venous pooling induced by a specially constructed garment is investigated as a possible means for reversing the reduction in maximal oxygen uptake regularly observed following bed rest. Experiments involved a 15-day period of bed rest during which four healthy male subjects, while remaining recumbent in bed, received daily 210-min venous pooling treatments from a reverse gradient garment supplying counterpressure to the torso. Results of exercise testing indicate that while maximal oxygen uptake, endurance time, and plasma volume were reduced and maximal heart rate increased after bed rest in the control group, those parameters remained essentially unchanged for the group undergoing venous pooling treatment. The results demonstrate the importance of fluid shifts and venous pooling within the cardiovascular system, in addition to physical activity, to the maintenance of cardiovascular conditioning.
High Intensity Interval Training for Maximizing Health Outcomes.
Karlsen, Trine; Aamot, Inger-Lise; Haykowsky, Mark; Rognmo, Øivind
Regular physical activity and exercise training are important actions to improve cardiorespiratory fitness and maintain health throughout life. There is solid evidence that exercise is an effective preventative strategy against at least 25 medical conditions, including cardiovascular disease, stroke, hypertension, colon and breast cancer, and type 2 diabetes. Traditionally, endurance exercise training (ET) to improve health-related outcomes has been performed at low to moderate intensity. However, a growing body of evidence suggests that higher exercise intensities may be superior to moderate intensity for maximizing health outcomes. The primary objective of this review is to discuss how aerobic high-intensity interval training (HIIT), as compared to moderate continuous training, may maximize outcomes, and to provide practical advice for successful clinical and home-based HIIT.
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2014 CFR
2014-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2012 CFR
2012-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2013 CFR
2013-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
Educational Programming for Pupils with Neurologically Based Language Disorders. Final Report.
ERIC Educational Resources Information Center
Zedler, Empress Y.
To investigate procedures whereby schools may achieve maximal results with otherwise normal underachieving pupils with neurologically based language-learning disorders, 100 such subjects were studied over a 2-year period. Fifty experimental subjects remained in regular classes in school and received individualized teaching outside of school hours…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results of our simulation experiments show improved performance in terms of both number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
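A plain EM iteration for a two-component 1-D Gaussian mixture illustrates the E-step (responsibilities) and M-step (weighted updates) that AECM/APECM reorganize into conditional blocks; this is a generic sketch, not the authors' AR-regression mixture or their storage scheme:

```python
import numpy as np

def em_gmm_1d(x, n_iters=100):
    """EM for a two-component 1-D Gaussian mixture on data x."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out init
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([
            pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0)
        # M-step: weighted means, standard deviations, mixing proportions
        nk = r.sum(axis=1)
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, pi = em_gmm_1d(x)
```

In the mixture-of-regressions setting each component's mean is replaced by a fitted AR regression, which is what makes full EM costly and conditional-maximization variants like AECM/APECM attractive.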
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.
In this review, we apply selected imputation strategies to label-free liquid chromatography-mass spectrometry (LC-MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC-MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performances with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.
Metadata from data: identifying holidays from anesthesia data.
Starnes, Joseph R; Wanderer, Jonathan P; Ehrenfeld, Jesse M
2015-05-01
The increasingly large databases available to researchers necessitate high-quality metadata that is not always available. We describe a method for generating this metadata independently. Cluster analysis and expectation-maximization were used to separate days into holidays/weekends and regular workdays using anesthesia data from Vanderbilt University Medical Center from 2004 to 2014. This classification was then used to describe differences between the two sets of days over time. We evaluated 3802 days and correctly categorized 3797 based on anesthesia case time (representing an error rate of 0.13%). Use of other metrics for categorization, such as billed anesthesia hours and number of anesthesia cases per day, led to similar results. Analysis of the two categories showed that surgical volume increased more quickly with time for non-holidays than holidays (p < 0.001). We were able to successfully generate metadata from data by distinguishing holidays based on anesthesia data. This data can then be used for economic analysis and scheduling purposes. It is possible that the method can be expanded to similar bimodal and multimodal variables.
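The day-classification idea above can be sketched with a two-cluster 1-D k-means on total daily anesthesia hours; the synthetic numbers below are hypothetical and are not the Vanderbilt data or the authors' exact clustering procedure:

```python
import numpy as np

def split_days(case_hours, n_iters=20):
    """Two-cluster 1-D k-means: label each day as low-volume
    (holiday/weekend-like, label 0) or regular workload (label 1)."""
    centers = np.array([case_hours.min(), case_hours.max()], dtype=float)
    labels = np.zeros(len(case_hours), dtype=int)
    for _ in range(n_iters):
        # assign each day to the nearer center
        labels = (np.abs(case_hours - centers[0])
                  > np.abs(case_hours - centers[1])).astype(int)
        # recompute centers from current assignments
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = case_hours[labels == k].mean()
    return labels, centers

rng = np.random.default_rng(0)
hours = np.concatenate([rng.normal(8, 2, 60),     # holiday/weekend-like days
                        rng.normal(60, 8, 240)])  # regular workdays
labels, centers = split_days(hours)
```

With well-separated day types the labels recover the two groups, which is why a simple bimodal split of a workload metric can regenerate holiday metadata from the data itself.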
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and the method requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
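For the False-Discovery-Rate step mentioned above, the standard Benjamini-Hochberg step-up procedure is the usual reference point; the sketch below is generic and not necessarily the authors' exact anomalousness calibration:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of
    hypotheses rejected while controlling the FDR at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank          # largest rank passing the step-up test
    return sorted(order[:k_max])  # reject the k_max smallest p-values

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9])
```

Scoring each co-occurrence under the fitted model and passing the resulting p-values through such a procedure yields a set of flagged anomalies with a controlled expected false-discovery proportion.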
On split regular BiHom-Lie superalgebras
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Liangyun; Zhang, Chiping
2018-06-01
We introduce the class of split regular BiHom-Lie superalgebras as the natural extension of the classes of split Hom-Lie superalgebras and split Lie superalgebras. By developing techniques of connections of roots for this kind of algebras, we show that such a split regular BiHom-Lie superalgebra L is of the form L = U + Σ_{[α]∈Λ/∼} I_[α], with U a subspace of the Abelian (graded) subalgebra H and each I_[α] a well-described (graded) ideal of L satisfying [I_[α], I_[β]] = 0 whenever [α] ≠ [β]. Under certain conditions, in the case of L being of maximal length, the simplicity of the algebra is characterized and it is shown that L is the direct sum of the family of its simple (graded) ideals.
Merrill, Jennifer E; Wardell, Jeffrey D; Read, Jennifer P
2009-12-01
The present study investigated whether tension reduction expectancies were uniquely associated with self-reported mood following in-lab alcohol administration, given that little research has addressed this association. We also tested whether level of experience with alcohol, which may influence the learning of expectancies, moderated expectancy-mood associations. Regularly drinking college students (N = 145) recruited through advertisements completed self-report measures of positive alcohol expectancies, alcohol involvement, demographics, and pre- and post-drinking mood, and then consumed alcohol ad libitum up to four drinks in the laboratory. Regression analyses controlling for pre-consumption mood, blood alcohol concentration, and all other positive expectancies showed tension reduction expectancies to be a marginally significant positive predictor of negative mood post-drinking. This association was significant only for those who achieved lower blood alcohol concentrations in lab and those who reported less involvement with alcohol (i.e., lower typical quantity, heavy episodic drinking frequency, and years of regular drinking). Findings suggest that associations between expectations for mood and actual post-drinking mood outcomes may operate differently for less versus more involved drinkers. Clinical implications pertain to early intervention, when expectancies may be less ingrained and perhaps more readily modified.
Demura, Shinichi; Morishita, Koji; Yamada, Takayoshi; Yamaji, Shunsuke; Komatsu, Miho
2011-11-01
L-Ornithine plays an important role in ammonia metabolism via the urea cycle. This study aimed to examine the effect of L-ornithine hydrochloride ingestion on ammonia metabolism and performance after intermittent maximal anaerobic cycle ergometer exercise. Ten healthy young adults (age, 23.8 ± 3.9 years; height, 172.3 ± 5.5 cm; body mass, 67.7 ± 6.1 kg) with regular training experience ingested L-ornithine hydrochloride (0.1 g/kg body mass) or placebo after 30 s of maximal cycling exercise. Five sets of the same maximal cycling exercise were conducted 60 min after ingestion, and a final maximal cycling exercise was conducted after a 15 min rest. The intensity of cycling exercise was based on each subject's body mass (0.74 N/kg). Work volume (W) and peak rpm before and after intermittent maximal ergometer exercise and the following serum parameters were measured before ingestion, immediately after exercise and 15 min after exercise: ornithine, ammonia, urea, lactic acid and glutamate. Peak rpm was significantly greater with L-ornithine hydrochloride ingestion than with placebo. Serum ornithine level was significantly greater with L-ornithine hydrochloride ingestion than with placebo immediately and 15 min after intermittent maximal cycle ergometer exercise. In conclusion, although maximal anaerobic performance may be improved by L-ornithine hydrochloride ingestion before intermittent maximal anaerobic cycle ergometer exercise, this improvement may not depend on enhanced ammonia metabolism.
Exercise Responses after Inactivity
NASA Technical Reports Server (NTRS)
Convertino, Victor A.
1986-01-01
The exercise response after bed rest inactivity is a reduction in physical work capacity, manifested by significant decreases in oxygen uptake. The magnitude of decrease in maximal oxygen uptake (V̇O2max) is related to the duration of confinement and the pre-bed-rest level of aerobic fitness; these relationships are relatively independent of age and gender. The reduced exercise performance and V̇O2max following bed rest are associated with various physiological adaptations, including reductions in blood volume, submaximal and maximal stroke volume, maximal cardiac output, skeletal muscle tone and strength, and aerobic enzyme capacities, as well as increases in venous compliance and submaximal and maximal heart rate. This reduction in physiological capacity can be partially restored by specific countermeasures that provide regular muscular activity or orthostatic stress, or both, during the bed rest exposure. The understanding of these physiological and physical responses to exercise following bed rest inactivity has important implications for the solution of safety and health problems that arise in clinical medicine, aerospace medicine, sedentary living, and aging.
Zachary, Chase E; Jiao, Yang; Torquato, Salvatore
2011-05-01
Hyperuniform many-particle distributions possess a local number variance that grows more slowly than the volume of an observation window, implying that the local density is effectively homogeneous beyond a few characteristic length scales. Previous work on maximally random strictly jammed sphere packings in three dimensions has shown that these systems are hyperuniform and possess unusual quasi-long-range pair correlations decaying as r^(-4), resulting in anomalous logarithmic growth in the number variance. However, recent work on maximally random jammed sphere packings with a size distribution has suggested that such quasi-long-range correlations and hyperuniformity are not universal among jammed hard-particle systems. In this paper, we show that such systems are indeed hyperuniform with signature quasi-long-range correlations by characterizing the more general local-volume-fraction fluctuations. We argue that the regularity of the void space induced by the constraints of saturation and strict jamming overcomes the local inhomogeneity of the disk centers to induce hyperuniformity in the medium with a linear small-wave-number nonanalytic behavior in the spectral density, resulting in quasi-long-range spatial correlations scaling with r^(-(d+1)) in d Euclidean space dimensions. A numerical and analytical analysis of the pore-size distribution for a binary maximally random jammed system in addition to a local characterization of the n-particle loops governing the void space surrounding the inclusions is presented in support of our argument. This paper is the first part of a series of two papers considering the relationships among hyperuniformity, jamming, and regularity of the void space in hard-particle packings.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
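The Tikhonov-regularized minimum norm estimate at the center of this comparison has the closed form x̂ = Lᵀ(LLᵀ + λI)⁻¹y. A minimal numerical sketch, using a random stand-in "lead field" rather than a real MEG forward model, shows how the choice of λ shrinks the source estimate:

```python
import numpy as np

def mne(L, y, lam):
    """Tikhonov-regularized minimum norm estimate:
    x = L^T (L L^T + lam * I)^{-1} y."""
    G = L @ L.T
    return L.T @ np.linalg.solve(G + lam * np.eye(G.shape[0]), y)

rng = np.random.default_rng(1)
n_sensors, n_sources = 50, 200
L = rng.standard_normal((n_sensors, n_sources))  # stand-in "lead field"
x_true = np.zeros(n_sources)
x_true[[30, 150]] = 1.0                          # two active sources
y = L @ x_true + 0.05 * rng.standard_normal(n_sensors)

x_light = mne(L, y, lam=1e-2)   # light regularization
x_heavy = mne(L, y, lam=1e2)    # heavy regularization shrinks the estimate
```

Sweeping lam in a loop like this, against a detection criterion for power or coherence, is the basic shape of the simulation study the abstract describes.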
The Self in Decision Making and Decision Implementation.
ERIC Educational Resources Information Center
Beach, Lee Roy; Mitchell, Terence R.
Since the early 1950s, the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…
ERIC Educational Resources Information Center
Mollenkopf, Dawn L.
2009-01-01
The "highly qualified teacher" requirement of No Child Left Behind has put pressure on rural school districts to recruit and retain highly qualified regular and special education teachers. If necessary, they may utilize uncertified, rural teachers with provisional certification; however, these teachers may find completing the necessary…
Portfolios for Prior Learning Assessment: Caught between Diversity and Standardization
ERIC Educational Resources Information Center
Sweygers, Annelies; Soetewey, Kim; Meeus, Wil; Struyf, Elke; Pieters, Bert
2009-01-01
In recent years, procedures have been established in Flanders for "Prior Learning Assessment" (PLA) outside the formal learning circuit, of which the portfolio is a regular component. In order to maximize the possibilities of acknowledgement of prior learning assessment, the Flemish government is looking for a set of common criteria and…
A Study of Coordination Between Mathematics and Chemistry in the Pre-Technical Program.
ERIC Educational Resources Information Center
Loiseau, Roger A.
This research was undertaken to determine whether the mathematics course offered to students taking courses in chemical technology was adequate. Students in a regular class and an experimental class were given mathematics and chemistry pretests and posttests. The experimental class was taught using a syllabus designed to maximize the coherence…
Benefits of Moderate-Intensity Exercise during a Calorie-Restricted Low-Fat Diet
ERIC Educational Resources Information Center
Apekey, Tanefa A.; Morris, A. E. J.; Fagbemi, S.; Griffiths, G. J.
2012-01-01
Objective: Despite the health benefits, many people do not undertake regular exercise. This study investigated the effects of moderate-intensity exercise on cardiorespiratory fitness (lung age, blood pressure and maximal aerobic power, VO2max), serum lipid concentrations and body mass index (BMI) in sedentary overweight/obese adults…
Minimize Subjective Theory, Maximize Authentic Experience in the Teaching of French Civilization.
ERIC Educational Resources Information Center
Corredor, Eva L.
A program developed to teach French civilization and modern France at the U.S. Naval Academy (Annapolis, Maryland) was designed to take advantage of readily available, relatively sophisticated technology for classroom instruction. The hardware used includes a satellite earth station that receives regular television broadcasts from France, a…
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on a single metric or kernel, so selecting an appropriate kernel is necessary for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it may be ill-posed under some conditions, which hinders its applications. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. Solutions for the proposed framework can be found via trace ratio maximization. Experimental results demonstrate its effectiveness on benchmark text, image and sound datasets, in supervised, unsupervised as well as semi-supervised settings.
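Trace ratio maximization of tr(VᵀAV)/tr(VᵀBV) over orthonormal V is commonly solved by alternating between updating the ratio λ and taking the top eigenvectors of A − λB. A generic sketch of that inner routine (not the MKL-TR framework itself, and with random stand-in matrices) is:

```python
import numpy as np

def trace_ratio(A, B, dim, n_iter=50):
    """Maximize tr(V^T A V) / tr(V^T B V) over orthonormal n x dim V by the
    standard iteration: fix the current ratio lam, then take the top-dim
    eigenvectors of A - lam * B."""
    lam = 0.0
    V = np.eye(A.shape[0])[:, :dim]
    for _ in range(n_iter):
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, np.argsort(w)[-dim:]]     # eigenvectors of largest eigenvalues
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    return V, lam

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M @ M.T                 # symmetric PSD, plays the "numerator" role
B = np.eye(6) + 0.1 * A     # symmetric PD, plays the "denominator" role
V, lam = trace_ratio(A, B, dim=2)
```

In MKL-TR the matrices A and B would additionally depend on the learned kernel weights; the sketch above only shows the eigen-subproblem that the framework solves repeatedly.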
On the role of budget sufficiency, cost efficiency, and uncertainty in species management
van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.
2014-01-01
Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
Verlinde's emergent gravity versus MOND and the case of dwarf spheroidals
NASA Astrophysics Data System (ADS)
Diez-Tejedor, Alberto; Gonzalez-Morales, Alma X.; Niz, Gustavo
2018-06-01
In a recent paper, Erik Verlinde has developed the interesting possibility that space-time and gravity may emerge from the entangled structure of an underlying microscopic theory. In this picture, dark matter arises as a response to the standard model of particle physics from the delocalized degrees of freedom that build up the dark energy component of the Universe. Dark matter physics is then regulated by a characteristic acceleration scale a0, identified with the radius of the (quasi)-de Sitter universe we inhabit. For a point particle matter source, or outside an extended spherically symmetric object, MOND's empirical fitting formula is recovered. However, Verlinde's theory critically departs from MOND when considering the inner structure of galaxies, differing by a factor of 2 at the centre of a regular massive body. For illustration, we use the eight classical dwarf spheroidal satellites of the Milky Way. These objects are perfect testbeds for the model given their approximate spherical symmetry, measured kinematics, and identified missing mass. We show that, without the assumption of a maximal deformation, Verlinde's theory can fit the velocity dispersion profile in dwarf spheroidals with no further need of an extra dark particle component. If a maximal deformation is considered, the theory leads to mass-to-light ratios that are marginally larger than expected from stellar population and formation history studies. We also compare our results with the recent phenomenological interpolating MOND function of McGaugh et al., and find a departure that, for these galaxies, is consistent with the scatter in current observations.
Hadamard States for the Klein-Gordon Equation on Lorentzian Manifolds of Bounded Geometry
NASA Astrophysics Data System (ADS)
Gérard, Christian; Oulghazi, Omar; Wrochna, Michał
2017-06-01
We consider the Klein-Gordon equation on a class of Lorentzian manifolds with Cauchy surface of bounded geometry, which is shown to include examples such as exterior Kerr, Kerr-de Sitter spacetime and the maximal globally hyperbolic extension of the Kerr outer region. In this setup, we give an approximate diagonalization and a microlocal decomposition of the Cauchy evolution using a time-dependent version of the pseudodifferential calculus on Riemannian manifolds of bounded geometry. We apply this result to construct all pure regular Hadamard states (and associated Feynman inverses), where regular refers to the state's two-point function having Cauchy data given by pseudodifferential operators. This allows us to conclude that there is a one-parameter family of elliptic pseudodifferential operators that encodes both the choice of (pure, regular) Hadamard state and the underlying spacetime metric.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator, which provides intervals instead of single-valued projections. The approach is an extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
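The classical single-valued ML-EM iteration that NIBEM generalizes multiplies the current estimate by the back-projected ratio of measured to forward-projected data. A minimal sketch with a random stand-in system matrix (not a PET projector) is:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Classical single-valued ML-EM update, the algorithm that NIBEM
    extends with interval-valued projections:
        x <- (x / A^T 1) * A^T (y / (A x))
    Multiplicative updates keep the activity estimate nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

rng = np.random.default_rng(3)
A = rng.random((40, 10))        # hypothetical system (projection) matrix
x_true = rng.random(10) + 0.5   # nonnegative "activity"
y = A @ x_true                  # noiseless projections for illustration
x_hat = mlem(A, y)
```

A useful invariant of this update, easy to verify from the formula, is that after each iteration the total forward-projected counts equal the total measured counts.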
The benefits of social influence in optimized cultural markets.
Abeliuk, Andrés; Berbeglia, Gerardo; Cebrian, Manuel; Van Hentenryck, Pascal
2015-01-01
Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation of why experts routinely fail at predicting the commercial success of cultural products. As a result, social influence is often presented in a negative light. Here, we show the benefits of social influence for cultural markets. We present a policy that uses product quality, appeal, position bias and social influence to maximize expected profits in the market. Our computational experiments show that our profit-maximizing policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms, in expectation, any policy that does not display social signals. Our results contrast with earlier work, which focused on the unpredictability and inequalities created by social influence. We show for the first time that, under our policy, dynamically showing consumers positive social signals increases the expected profit of the seller in cultural markets. We also show that, in reasonable settings, our profit-maximizing policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefit of the market.
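One ingredient of such a policy, position bias, can be illustrated in isolation: if purchase probability factorized as visibility[position] × appeal[product], the rearrangement inequality says the best static display pairs the most visible slots with the most appealing products. This toy sketch ignores quality and social-influence dynamics entirely; all numbers are illustrative:

```python
import numpy as np

def best_display(visibility, appeal):
    """Assign products to display positions to maximize expected purchases
    when purchase probability factorizes as visibility[pos] * appeal[prod].
    By the rearrangement inequality, pairing the sorted sequences largest
    with largest is optimal."""
    order_pos = np.argsort(-np.asarray(visibility))
    order_prod = np.argsort(-np.asarray(appeal))
    assign = np.empty_like(order_pos)
    assign[order_pos] = order_prod      # product shown at each position
    return assign

visibility = [0.9, 0.5, 0.2]   # position bias (top slot seen most often)
appeal = [0.1, 0.7, 0.4]       # per-product purchase appeal
assign = best_display(visibility, appeal)
# Expected purchases: 0.9*0.7 + 0.5*0.4 + 0.2*0.1 = 0.85
```

The dynamic policy in the paper goes further by also feeding observed social signals back into the display decision; the static assignment above is only the position-bias baseline.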
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
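The threshold rebalancing idea can be sketched for two assets: let the weights drift with the market and trade back to the target only when the first asset's weight leaves a band, paying a proportional cost on the traded amount. The band, cost level, and log-normal relatives below are illustrative assumptions, not the paper's calibrated thresholds:

```python
import numpy as np

def threshold_rebalance(relatives, target=0.5, band=0.1, cost=0.002):
    """Two-asset threshold rebalanced portfolio: rebalance to the target
    weight only when asset 1's weight leaves [target - band, target + band],
    paying a proportional transaction cost on the traded amount."""
    holdings = np.array([target, 1.0 - target])   # wealth split across assets
    for r1, r2 in relatives:
        holdings *= (r1, r2)                      # market move
        wealth = holdings.sum()
        frac = holdings[0] / wealth
        if abs(frac - target) > band:             # threshold breached
            traded = abs(holdings[0] - target * wealth)
            wealth -= cost * traded               # illustrative one-sided cost
            holdings = wealth * np.array([target, 1.0 - target])
    return holdings.sum()

rng = np.random.default_rng(4)
# Log-normal price relatives, as in the Black-Scholes-style assumption.
rels = np.exp(rng.normal(0.0005, 0.01, size=(250, 2)))
final_wealth = threshold_rebalance(rels)
```

The paper's contribution is choosing the band analytically so as to maximize expected cumulative wealth under the assumed price model; the sketch only simulates a given band.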
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Ecological neighborhoods as a framework for umbrella species selection
Stuber, Erica F.; Fontaine, Joseph J.
2018-01-01
Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.
Bolduc, Virginie; Thorin-Trescases, Nathalie; Thorin, Eric
2013-09-01
Cognitive performances are tightly associated with the maximal aerobic exercise capacity, both of which decline with age. The benefits on mental health of regular exercise, which slows the age-dependent decline in maximal aerobic exercise capacity, have been established for centuries. In addition, the maintenance of an optimal cerebrovascular endothelial function through regular exercise, part of a healthy lifestyle, emerges as one of the key and primary elements of successful brain aging. Physical exercise requires the activation of specific brain areas that trigger a local increase in cerebral blood flow to match neuronal metabolic needs. In this review, we propose three ways by which exercise could maintain the cerebrovascular endothelial function, a premise to a healthy cerebrovascular function and an optimal regulation of cerebral blood flow. First, exercise increases blood flow locally and increases shear stress temporarily, a known stimulus for endothelial cell maintenance of Akt-dependent expression of endothelial nitric oxide synthase, nitric oxide generation, and the expression of antioxidant defenses. Second, the rise in circulating catecholamines during exercise not only facilitates adequate blood and nutrient delivery by stimulating heart function and mobilizing energy supplies but also enhances endothelial repair mechanisms and angiogenesis. Third, in the long term, regular exercise sustains a low resting heart rate that reduces the mechanical stress imposed to the endothelium of cerebral arteries by the cardiac cycle. Any chronic variation from a healthy environment will perturb metabolism and thus hasten endothelial damage, favoring hypoperfusion and neuronal stress.
An ERP study of regular and irregular English past tense inflection.
Newman, Aaron J; Ullman, Michael T; Pancheva, Roumyana; Waligura, Diane L; Neville, Helen J
2007-01-01
Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs ("regulars") is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, "irregular" English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over midline parietal sites (P600s), and seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory.
Text Classification for Intelligent Portfolio Management
2002-05-01
Recent approaches include nearest neighbor classification [15], naive Bayes with EM (Expectation Maximization) [11] [13], and Winnow with active learning [10]. The system combines active learning and Expectation Maximization (EM): active learning is used to actively select documents for labeling, then EM assigns probabilistic labels to the remaining unlabeled documents.
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
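The primal problem in this mean-variance setting, minimizing wᵀΣw under a budget constraint and an expected-return constraint, has a closed-form Lagrangian solution that a short sketch can verify numerically. The parameters below are random stand-ins; this is the textbook finite-size problem, not the replica analysis itself:

```python
import numpy as np

def min_risk_portfolio(Sigma, mu, R):
    """Minimize w' Sigma w subject to sum(w) = 1 (budget) and mu' w = R
    (expected return). Stationarity gives w = Sigma^{-1} C lam, with the two
    Lagrange multipliers lam solving (C' Sigma^{-1} C) lam = (1, R)."""
    C = np.column_stack([np.ones(len(mu)), mu])   # constraint matrix
    Sinv_C = np.linalg.solve(Sigma, C)
    lam = np.linalg.solve(C.T @ Sinv_C, np.array([1.0, R]))
    return Sinv_C @ lam

rng = np.random.default_rng(5)
M = rng.standard_normal((8, 8))
Sigma = M @ M.T + 8 * np.eye(8)     # positive-definite covariance
mu = rng.normal(0.05, 0.02, 8)      # expected returns
w = min_risk_portfolio(Sigma, mu, R=0.05)
```

The dual problem, maximizing expected return under budget and risk constraints, has the same two-multiplier structure, which is the primal-dual correspondence the replica analysis studies in the large-system limit.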
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D
In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of the pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.
When Does Reward Maximization Lead to Matching Law?
Sakai, Yutaka; Fukai, Tomoki
2008-01-01
What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to an animal's decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on the subject's past choices. We may term this strategy of partial reward maximization the "matching strategy". This strategy is then applied to the case where the subject's decision system updates the information used for making a decision. Such information includes the subject's past actions or sensory stimuli, and the internal storage of this information is often called "state variables". We demonstrate that the matching strategy provides an easy way to maximize reward when combined with exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, providing a novel insight into the relationship between maximizing and matching. PMID:19030101
ERIC Educational Resources Information Center
Wood, Richard E.
Second language instruction in the U.S. and Europe is in difficulties. The choice of a second language is arbitrary and the motivation dubious. In Europe and now also in the U.S., attention has turned to the planned interlanguage Esperanto, which offers a maximally regularized structure, is considered "easy" by learners, and has the…
What Can Parents Expect During Their Infant's Well-Child Visits?
On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry
NASA Astrophysics Data System (ADS)
Lu, Xin Yang
2018-04-01
In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth in 2+1 dimensions with elastic forces on vicinal surfaces, in the radial case with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via the Cauchy principal value. We first derive a formally equivalent parabolic evolution equation (i.e., full equivalence when sufficient regularity is assumed); the main aim is to prove existence, uniqueness and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.
American College of Sports Medicine Position Stand. Exercise and physical activity for older adults.
1998-06-01
ACSM Position Stand on Exercise and Physical Activity for Older Adults. Med. Sci. Sports. Exerc., Vol. 30, No. 6, pp. 992-1008, 1998. By the year 2030, the number of individuals 65 yr and over will reach 70 million in the United States alone; persons 85 yr and older will be the fastest growing segment of the population. As more individuals live longer, it is imperative to determine the extent and mechanisms by which exercise and physical activity can improve health, functional capacity, quality of life, and independence in this population. Aging is a complex process involving many variables (e.g., genetics, lifestyle factors, chronic diseases) that interact with one another, greatly influencing the manner in which we age. Participation in regular physical activity (both aerobic and strength exercises) elicits a number of favorable responses that contribute to healthy aging. Much has been learned recently regarding the adaptability of various biological systems, as well as the ways that regular exercise can influence them. Participation in a regular exercise program is an effective intervention/ modality to reduce/prevent a number of functional declines associated with aging. Further, the trainability of older individuals (including octo- and nonagenarians) is evidenced by their ability to adapt and respond to both endurance and strength training. Endurance training can help maintain and improve various aspects of cardiovascular function (as measured by maximal VO2, cardiac output, and arteriovenous O2 difference), as well as enhance submaximal performance. Importantly, reductions in risk factors associated with disease states (heart disease, diabetes, etc.) improve health status and contribute to an increase in life expectancy. Strength training helps offset the loss in muscle mass and strength typically associated with normal aging. 
Additional benefits from regular exercise include improved bone health and, thus, reduction in risk for osteoporosis; improved postural stability, thereby reducing the risk of falling and associated injuries and fractures; and increased flexibility and range of motion. While not as abundant, the evidence also suggests that involvement in regular exercise can also provide a number of psychological benefits related to preserved cognitive function, alleviation of depression symptoms and behavior, and an improved concept of personal control and self-efficacy. It is important to note that while participation in physical activity may not always elicit increases in the traditional markers of physiological performance and fitness (e.g., VO2max, mitochondrial oxidative capacity, body composition) in older adults, it does improve health (reduction in disease risk factors) and functional capacity. Thus, the benefits associated with regular exercise and physical activity contribute to a more healthy, independent lifestyle, greatly improving the functional capacity and quality of life in this population.
Large-scale detection of repetitions
Smyth, W. F.
2014-01-01
Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯ ) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
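The repetitions this abstract refers to (squares such as TTT or CGACGA) can be illustrated with a deliberately naive sketch. This is hypothetical code for illustration only, not from the paper: a brute-force cubic-time scan, in contrast to the linear-time run-based machinery with suffix arrays and Lempel–Ziv factorization described above.

```python
def find_repetitions(x):
    """Naive detection of all squares (tandem repeats) in string x.

    Returns (start, period) pairs such that x[start:start+2*period]
    consists of two identical copies of x[start:start+period],
    e.g. "CGACGA" is a square of period 3.
    """
    n = len(x)
    reps = []
    for period in range(1, n // 2 + 1):
        for start in range(n - 2 * period + 1):
            if x[start:start + period] == x[start + period:start + 2 * period]:
                reps.append((start, period))
    return reps

print(find_repetitions("CGACGA"))  # [(0, 3)]
print(find_repetitions("TTT"))     # [(0, 1), (1, 1)]
```

The contrast with the output-sensitive approach the paper explores: this scan always does Θ(n³) work even when the string contains no repetitions at all.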
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
NASA Astrophysics Data System (ADS)
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, diverse methods for filling gaps in observed precipitation series have been proposed in recent years, each progressively more advanced than traditional approaches. The present study validated 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data of multiple incomplete series at the same time in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, and in particular that the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
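As an illustration of the regularized-EM family mentioned above, here is a minimal, hypothetical sketch (not the study's implementation, which follows the full RegEM algorithm): missing entries are iteratively re-estimated as ridge-stabilized conditional means under a Gaussian model of the station network.

```python
import numpy as np

def regem_impute(X, ridge=1e-2, n_iter=50):
    """Simplified RegEM-style multiple imputation.

    X     : (stations x time) or (time x stations) array with NaNs for gaps.
    ridge : regularization added to the covariance diagonal (assumed tuning
            parameter of this sketch, not the study's value).
    """
    X = X.copy()
    miss = np.isnan(X)
    # initialize gaps with column means
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        C = np.cov(X, rowvar=False) + ridge * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # conditional mean of missing entries given observed ones
            B = np.linalg.solve(C[np.ix_(o, o)], C[np.ix_(o, m)])
            X[i, m] = mu[m] + (X[i, o] - mu[o]) @ B
    return X
```

With a perfectly linear pair of series, the imputed value converges close to the value implied by the regression (slightly shrunk by the ridge term).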
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.; ...
2015-04-09
In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate their accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches on their individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performance with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are traditionally treated as regular cubic voxels. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment method can improve the performance of TGS by yielding better transmission reconstruction images. With the real irregular edge voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
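The MLEM algorithm mentioned in this and several other abstracts has a compact multiplicative update. The following generic textbook sketch (illustrative only, not the paper's TGS implementation) shows it for a Poisson count model y ~ Poisson(A x):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(A x).

    A : (m, n) nonnegative system matrix; y : (m,) measured counts.
    The multiplicative update preserves positivity of the estimate.
    """
    x = np.ones(A.shape[1])        # strictly positive initial estimate
    sens = A.sum(axis=0)           # sensitivity image (column sums)
    sens[sens == 0] = 1.0          # guard against empty columns
    for _ in range(n_iter):
        proj = A @ x               # forward projection
        proj[proj == 0] = 1e-12    # guard against division by zero
        x *= (A.T @ (y / proj)) / sens   # MLEM multiplicative update
    return x
```

For consistent noiseless data the iterates converge to the exact solution; with noisy counts, early stopping or regularization (as discussed elsewhere in these abstracts) controls noise amplification.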
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
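For context, the forward model whose couplings the authors infer, Glauber dynamics for an Ising network, admits a very small simulation sketch. This is a hypothetical discrete-step illustration; the paper itself works in continuous time with point-process likelihoods.

```python
import math
import random

def glauber_step(s, J, h, beta, rng):
    """One Glauber (heat-bath) update: pick a random spin and set it to +1
    with its conditional probability given the rest of the network."""
    n = len(s)
    i = rng.randrange(n)
    local_field = h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * local_field))
    s[i] = 1 if rng.random() < p_up else -1
    return s

# usage: three uncoupled spins driven by strong positive fields
rng = random.Random(0)
s = [-1, -1, -1]
J = [[0.0] * 3 for _ in range(3)]
for _ in range(300):
    glauber_step(s, J, [50.0] * 3, 1.0, rng)
```

Trajectories of such updates, recorded as spin-flip events in time, are exactly the kind of data the EM and variational algorithms in the abstract take as input.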
Vacuum polarization in the field of a multidimensional global monopole
NASA Astrophysics Data System (ADS)
Grats, Yu. V.; Spirin, P. A.
2016-11-01
An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values <ϕ2>ren and < T 00>ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.
ERIC Educational Resources Information Center
Vlachou, Anastasia; Eleftheriadou, Dimitra; Metallidou, Panayiota
2014-01-01
This study aimed to (a) investigate whether the presence of learning difficulties (LD) in primary school children differentiates Greek teachers' attributional patterns, emotional responses, expectations and evaluative feedback for the children's academic failures and (b) to examine possible differences between regular and special education…
Chen, Wei J.; Ting, Te-Tien; Chang, Chao-Ming; Liu, Ying-Chun; Chen, Chuan-Yu
2014-01-01
The popularity of ketamine for recreational use among young people began to increase, particularly in Asia, in 2000. To gain more knowledge about the use of ketamine among high risk individuals, a respondent-driven sampling (RDS) was implemented among regular alcohol and tobacco users in the Taipei metropolitan area from 2007 to 2010. The sampling was initiated in three different settings (i.e., two in the community and one in a clinic) to recruit seed individuals. Each participant was asked to refer one to five friends known to be regular tobacco smokers and alcohol drinkers to participate in the present study. Incentives were offered differentially upon the completion of an interview and successful referral. Information pertaining to drug use experience was collected by an audio computer-assisted self-interview instrument. Software built for RDS analyses was used for data analyses. Of the 1,115 subjects recruited, about 11.7% of the RDS respondents reported ever having used ketamine. Positive expectancy of ketamine use was positively associated with ketamine use; in contrast, negative expectancy was inversely associated with ketamine use. Decision-making characteristics as measured on the Iowa Gambling Task using reinforcement learning models revealed that ketamine users learned less from the most recent event than both tobacco- and drug-naïve controls and regular tobacco and alcohol users. These findings about ketamine use among young people have implications for its prevention and intervention. PMID:25264412
Work Placement in UK Undergraduate Programmes. Student Expectations and Experiences.
ERIC Educational Resources Information Center
Leslie, David; Richardson, Anne
1999-01-01
A survey of 189 pre- and 106 post-sandwich work-experience students in tourism suggested that potential benefits were not being maximized. Students needed better preparation for the work experience, especially in terms of their expectations. The work experience needed better design, and the role of industry tutors needed clarification. (SK)
Career Preference among Universities' Faculty: Literature Review
ERIC Educational Resources Information Center
Alenzi, Faris Q.; Salem, Mohamed L.
2007-01-01
Why do people enter academic life? What are their expectations? How can they maximize their experience and achievements, both short- and long-term? How much should they move towards commercialization? What can they do to improve their career? How much autonomy can they reasonably expect? What are the key issues for academics and aspiring academics…
Picking battles wisely: plant behaviour under competition.
Novoplansky, Ariel
2009-06-01
Plants are limited in their ability to choose their neighbours, but they are able to orchestrate a wide spectrum of rational competitive behaviours that increase their prospects to prevail under various ecological settings. Through the perception of neighbours, plants are able to anticipate probable competitive interactions and modify their competitive behaviours to maximize their long-term gains. Specifically, plants can minimize competitive encounters by avoiding their neighbours; maximize their competitive effects by aggressively confronting their neighbours; or tolerate the competitive effects of their neighbours. However, the adaptive values of these non-mutually exclusive options are expected to depend strongly on the plants' evolutionary background and to change dynamically according to their past development, and relative sizes and vigour. Additionally, the magnitude of competitive responsiveness is expected to be positively correlated with the reliability of the environmental information regarding the expected competitive interactions and the expected time left for further plastic modifications. Concurrent competition over external and internal resources and morphogenetic signals may enable some plants to increase their efficiency and external competitive performance by discriminately allocating limited resources to their more promising organs at the expense of failing or less successful organs.
Instability of enclosed horizons
NASA Astrophysics Data System (ADS)
Kay, Bernard S.
2015-03-01
We point out that there are solutions to the scalar wave equation on dimensional Minkowski space with finite energy tails which, if they reflect off a uniformly accelerated mirror due to (say) Dirichlet boundary conditions on it, develop an infinite stress-energy tensor on the mirror's Rindler horizon. We also show that, in the presence of an image mirror in the opposite Rindler wedge, suitable compactly supported arbitrarily small initial data on a suitable initial surface will develop an arbitrarily large stress-energy scalar near where the two horizons cross. Also, while there is a regular Hartle-Hawking-Israel-like state for the quantum theory between these two mirrors, there are coherent states built on it for which there are similar singularities in the expectation value of the renormalized stress-energy tensor. We conjecture that in other situations with analogous enclosed horizons such as a (maximally extended) Schwarzschild black hole in equilibrium in a (stationary spherical) box or the (maximally extended) Schwarzschild-AdS spacetime, there will be similar stress-energy singularities and almost-singularities—leading to instability of the horizons when gravity is switched on and matter and gravity perturbations are allowed for. All this suggests it is incorrect to picture a black hole in equilibrium in a box or a Schwarzschild-AdS black hole as extending beyond the past and future horizons of a single Schwarzschild (/Schwarzschild-AdS) wedge. It would thus provide new evidence for 't Hooft's brick wall model while seeming to invalidate the picture in Maldacena's 'Eternal black holes in AdS'. It would thereby also support the validity of the author's matter-gravity entanglement hypothesis and of the paper 'Brick walls and AdS/CFT' by the author and Ortíz.
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN Initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1), and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
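The first step of the pipeline described above, coherence reduction via truncated SVD, can be sketched generically. This hypothetical fragment is not the authors' code, and the truncation level k is an assumed tuning parameter:

```python
import numpy as np

def tsvd_precondition(A, y, k):
    """Transform the system y = A x into an equivalent low-coherence one.

    Truncating the SVD A = U S V^T to rank k gives transformed data
    y_t = S_k^{-1} U_k^T y satisfying y_t ~= V_k^T x, where the rows of
    V_k^T are orthonormal and hence have low mutual coherence.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]
    y_t = (Uk.T @ y) / sk
    return Vtk, y_t
```

The sparse-recovery step (here, the homotopy l1 strategy) would then operate on the transformed pair (Vtk, y_t) instead of the original coherent system.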
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
With age a lower individual breathing reserve is associated with a higher maximal heart rate.
Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes
2018-01-01
Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is supposed to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR x stroke volume) and high ventilatory requirements. Due to the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with the age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad range of ages (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve, due to a high ventilatory demand and/or a low ventilatory capacity and more pronounced at a higher age, was associated with a higher HRmax. Age explained 72% of the observed variance in HRmax, improving to 83% when the variable "breathing reserve" was entered. The presented findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e. a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association has to be investigated in a more physiological scenario. Copyright © 2017 Elsevier B.V. All rights reserved.
Skedgel, Chris; Wailoo, Allan; Akehurst, Ron
2015-01-01
Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.
ERIC Educational Resources Information Center
McKeithan, Glennda Kashner
2016-01-01
An increase has occurred in the number of students identified as having high functioning autism (HFA), who are being served in the regular education setting with their non-disabled peers. Many of these students have difficulty with academic and social expectations in this setting, and a minimal amount of information is available to educators…
Katashima, Takuya; Urayama, Kenji; Chung, Ung-il; Sakai, Takamasa
2015-05-07
The pure shear deformation of the Tetra-polyethylene glycol gels reveals the presence of an explicit cross-effect of strains in the strain energy density function, even for polymer networks with a nearly regular structure including no appreciable amount of structural defects such as trapped entanglements. This result is in contrast to the expectation of the classical Gaussian network model (neo-Hookean model), i.e., the vanishing of the cross effect in regular networks with no trapped entanglement. The results show that (1) the cross effect of strains does not depend on the network-strand length; (2) the cross effect is not affected by the presence of non-network strands; (3) the cross effect is proportional to the network polymer concentration, including both elastically effective and ineffective strands; and (4) the cross effect is expected to vanish only in the zero limit of network concentration in real polymer networks. These features indicate that real polymer networks with regular network structures have an explicit cross-effect of strains, which originates from some interaction between network strands (other than the entanglement effect) such as nematic, topological, or excluded volume interactions.
Periodic binary sequence generators: VLSI circuits considerations
NASA Technical Reports Server (NTRS)
Perlman, M.
1984-01-01
Feedback shift registers are efficient periodic binary sequence generators. Polynomials of degree r over a Galois field characteristic 2(GF(2)) characterize the behavior of shift registers with linear logic feedback. The algorithmic determination of the trinomial of lowest degree, when it exists, that contains a given irreducible polynomial over GF(2) as a factor is presented. This corresponds to embedding the behavior of an r-stage shift register with linear logic feedback into that of an n-stage shift register with a single two-input modulo 2 summer (i.e., Exclusive-OR gate) in its feedback. This leads to Very Large Scale Integrated (VLSI) circuit architecture of maximal regularity (i.e., identical cells) with intercell communications serialized to a maximal degree.
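The shift-register behavior described above can be illustrated with a Fibonacci-style LFSR over GF(2). This is a toy sketch; the tap convention below is an assumption of the sketch, not the paper's notation:

```python
def lfsr_sequence(taps, state, length):
    """Generate `length` output bits from a linear feedback shift register.

    state : list of bits, oldest first; the oldest bit is output each step.
    taps  : 1-indexed positions into the state whose XOR forms the new bit.
            taps=[1, 3] realizes s(n+3) = s(n) XOR s(n+2), i.e. the
            primitive trinomial x^3 + x^2 + 1, whose single two-input
            XOR gives the maximal period 2**3 - 1 = 7.
    """
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]      # single XOR chain in hardware
        state = state[1:] + [feedback]    # shift and insert feedback bit
    return out

print(lfsr_sequence([1, 3], [1, 0, 0], 14))  # period-7 sequence, twice
```

A trinomial feedback polynomial is attractive for VLSI precisely because it needs only one two-input XOR gate regardless of the register length, matching the regular-cell architecture discussed in the abstract.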
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Utilization of community pharmacy space to enhance privacy: a qualitative study.
Hattingh, H Laetitia; Emmerton, Lynne; Ng Cheong Tin, Pascale; Green, Catherine
2016-10-01
Community pharmacists require access to consumers' information about their medicines and health-related conditions to make informed decisions regarding treatment options. Open communication between consumers and pharmacists is ideal although consumers are only likely to disclose relevant information if they feel that their privacy requirements are being acknowledged and adhered to. This study sets out to explore community pharmacy privacy practices, experiences and expectations and the utilization of available space to achieve privacy. Qualitative methods were used, comprising a series of face-to-face interviews with 25 pharmacists and 55 pharmacy customers in Perth, Western Australia, between June and August 2013. The use of private consultation areas for certain services and sensitive discussions was supported by pharmacists and consumers although there was recognition that workflow processes in some pharmacies may need to change to maximize the use of private areas. Pharmacy staff adopted various strategies to overcome privacy obstacles such as taking consumers to a quieter part of the pharmacy, avoiding exposure of sensitive items through packaging, lowering of voices, interacting during pharmacy quiet times and telephoning consumers. Pharmacy staff and consumers regularly had to apply judgement to achieve the required level of privacy. Management of privacy can be challenging in the community pharmacy environment, and on-going work in this area is important. As community pharmacy practice is increasingly becoming more involved in advanced medication and disease state management services with unique privacy requirements, pharmacies' layouts and systems to address privacy challenges require a proactive approach. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Can anti-gravity running improve performance to the same degree as over-ground running?
Brennan, Christopher T; Jenkins, David G; Osborne, Mark A; Oyewale, Michael; Kelly, Vincent G
2018-03-11
This study examined the changes in running performance, maximal blood lactate concentrations and running kinematics between 85%BM anti-gravity (AG) running and normal over-ground (OG) running over an 8-week training period. Fifteen elite male developmental cricketers were assigned to either the AG or over-ground (CON) running group. The AG group (n = 7) ran twice a week on an AG treadmill and once per week over-ground. The CON group (n = 8) completed all sessions OG on grass. Both AG and OG training resulted in similar improvements in time trial and shuttle run performance. Maximal running performance showed moderate differences between the groups; however, the AG condition resulted in less improvement. Large differences in maximal blood lactate concentrations were observed, with OG running producing greater improvements in blood lactate concentrations measured during maximal running. Moderate increases in stride length, paired with moderate decreases in stride rate, also resulted from AG training. The use of AG training to supplement regular OG training for performance should be approached cautiously, as extended use over long periods could lead to altered stride mechanics and a reduced blood lactate response.
Influence of label information on dark chocolate acceptability.
Torres-Moreno, M; Tarrega, A; Torrescasana, E; Blanch, C
2012-04-01
The aim of the present work was to study how the information on product labels influences consumer expectations and their acceptance and purchase intention of dark chocolate. Six samples of dark chocolate, varying in brand (premium and store brand) and in type of product (regular dark chocolate, single cocoa origin dark chocolate and high percentage of cocoa dark chocolate), were evaluated by 109 consumers who scored their liking and purchase intention under three conditions: blind (only tasting the products), expected (observing product label information) and informed (tasting the products together with provision of the label information). In the expected condition, consumer liking was mainly affected by the brand. In the blind condition, differences in liking were due to the type of product; the samples with a high percentage of cocoa were those less preferred by consumers. Under the informed condition, liking of dark chocolates varied depending on both brand and type of product. Premium brand chocolates generated high consumer expectations of chocolate acceptability, which were fulfilled by the sensory characteristics of the products. Store brand chocolates created lower expectations, but when they were tasted they were as acceptable as premium chocolates. Claims of a high percentage of cocoa and single cocoa origin on labels did not generate higher expectations than regular dark chocolates. Copyright © 2011 Elsevier Ltd. All rights reserved.
Maximizing the Science Output of GOES-R SUVI during Operations
NASA Astrophysics Data System (ADS)
Shaw, M.; Vasudevan, G.; Mathur, D. P.; Mansir, D.; Shing, L.; Edwards, C. G.; Seaton, D. B.; Darnel, J.; Nwachuku, C.
2017-12-01
Regular manual calibrations are an often-unavoidable demand on ground operations personnel during long-term missions. This paper describes a set of features built into the instrument control software and the techniques employed by the Solar Ultraviolet Imager (SUVI) team to automate a large fraction of regular on-board calibration activities, allowing SUVI to be operated with little manual commanding from the ground and little interruption to nominal sequencing. SUVI is a Generalized Cassegrain telescope with a large field of view that images the Sun in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. It is part of the payload of the Geostationary Operational Environmental Satellite (GOES-R) mission.
"Change deafness" arising from inter-feature masking within a single auditory object.
Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria
2014-03-01
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone whose frequency either is expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system-cortical signatures of both changes are evident in the MEG data-listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
Competitive Facility Location with Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2009-10-01
This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and to find its solution, three deterministic programming problems are considered: an expectation-maximizing problem, a probability-maximizing problem, and a satisfying-level-maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of facility location problems.
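The paper solves realistic instances with tabu search; for a tiny instance, the expectation-maximizing variant can be sketched by brute-force enumeration of the 0-1 location vector, exploiting the fact that the expectation of captured demand is linear in the random demands. The instance data, distance rule (customers patronize the nearest facility), and all names below are our illustrative assumptions, not the paper's formulation.

```python
import math
from itertools import product

def expected_profit(open_mask, candidates, competitor_sites, customers, open_cost):
    """Expected profit of opening the candidate facilities flagged in open_mask.

    customers: list of ((x, y), mean_demand). A customer's demand is captured
    when our nearest open facility is strictly closer than the competitor's.
    By linearity of expectation, only mean demands are needed.
    """
    profit = -open_cost * sum(open_mask)
    ours = [c for c, o in zip(candidates, open_mask) if o]
    for (cx, cy), mean_d in customers:
        if not ours:
            continue
        d_ours = min(math.dist((cx, cy), f) for f in ours)
        d_comp = min(math.dist((cx, cy), f) for f in competitor_sites)
        if d_ours < d_comp:
            profit += mean_d
    return profit

def solve(candidates, competitor_sites, customers, open_cost):
    """Enumerate all 0-1 location vectors and return the best one."""
    best = max(product((0, 1), repeat=len(candidates)),
               key=lambda m: expected_profit(m, candidates, competitor_sites,
                                             customers, open_cost))
    return best, expected_profit(best, candidates, competitor_sites,
                                 customers, open_cost)

# Toy instance: two candidate sites, one competitor, two customer clusters.
candidates = [(0.0, 0.0), (10.0, 0.0)]
competitor = [(5.0, 0.0)]
customers = [((0.0, 1.0), 5.0), ((10.0, 1.0), 3.0)]
best, val = solve(candidates, competitor, customers, open_cost=2.0)
```

Opening both sites captures both clusters (expected demand 8) at cost 4, beating either single-site option; tabu search replaces this enumeration when the number of candidate sites makes 2^n infeasible.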
Physical renormalization condition for de Sitter QED
NASA Astrophysics Data System (ADS)
Hayashinaka, Takahiro; Xue, She-Sheng
2018-05-01
We consider a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous, constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition, named maximal subtraction, imposes exponential suppression on the renormalized currents in the large-mass limit of the charged particles. Maximal subtraction changes the behavior of the induced currents previously obtained with the conventional minimal subtraction scheme, and is favored for several physically sensible predictions, including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and the finite current for the massless fermion.
Bechtold, Jordan; Hipwell, Alison; Lewis, David A; Loeber, Rolf; Pardini, Dustin
2016-08-01
Adolescents who regularly use marijuana may be at heightened risk of developing subclinical and clinical psychotic symptoms. However, this association could be explained by reverse causation or other factors. To address these limitations, the current study examined whether adolescents who engage in regular marijuana use exhibit a systematic increase in subclinical psychotic symptoms that persists during periods of sustained abstinence. The sample comprised 1,009 boys who were recruited in 1st and 7th grades. Self-reported frequency of marijuana use, subclinical psychotic symptoms, and several time-varying confounds (e.g., other substance use, internalizing/externalizing problems) were recorded annually from age 13 to 18. Fixed-effects (within-individual change) models examined whether adolescents exhibited an increase in their subclinical psychotic symptoms as a function of their recent and/or cumulative history of regular marijuana use and whether these effects were sustained following abstinence. Models controlled for all time-stable factors (by design) and several time-varying covariates as potential confounds. For each year adolescent boys engaged in regular marijuana use, their expected level of subsequent subclinical psychotic symptoms rose by 21% and their expected odds of experiencing subsequent subclinical paranoia or hallucinations rose by 133% and 92%, respectively. The effect of prior regular marijuana use on subsequent subclinical psychotic symptoms persisted even when adolescents stopped using marijuana for a year. These effects held after controlling for all time-stable and several time-varying confounds. No support was found for reverse causation. These results suggest that regular marijuana use may significantly increase the risk that an adolescent will experience persistent subclinical psychotic symptoms.
Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory
ERIC Educational Resources Information Center
Agres, Kat; Abdallah, Samer; Pearce, Marcus
2018-01-01
A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different…
Ten Tips for Using Co-Planning Time More Efficiently
ERIC Educational Resources Information Center
Murawski, Wendy W.
2012-01-01
In this era of collaboration, educators are frequently expected to co-plan with one another on a regular basis. Unfortunately, the expectation of co-planning is not often accompanied by the time required or by the strategies necessary to plan effectively and efficiently for the inclusive classroom. This article provides 10 concrete tips for…
NASA Astrophysics Data System (ADS)
Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong
2018-05-01
This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ )^α and the magnetic diffusion by partial Laplacian. We are able to show that this system with any α >0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study on the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.
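For concreteness, the system under study can be written as follows (our normalization; the paper's exact coefficients may differ, and we take the partial magnetic diffusion to act in the x1 direction only, which is one common reading of "partial Laplacian"):

```latex
\begin{aligned}
&\partial_t u + (u\cdot\nabla)u + \nu(-\Delta)^{\alpha} u = -\nabla p + (b\cdot\nabla)b, \\
&\partial_t b + (u\cdot\nabla)b = (b\cdot\nabla)u + \eta\,\partial_{x_1 x_1} b, \\
&\nabla\cdot u = \nabla\cdot b = 0, \qquad \alpha > 0,
\end{aligned}
```

where u is the velocity, b the magnetic field, and p the pressure. The difficulty the abstract refers to is that \eta\,\partial_{x_1 x_1} b smooths b in only one direction, so parabolic tools built on the full heat semigroup (e.g. maximal regularity) are unavailable for the magnetic equation.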
ERIC Educational Resources Information Center
Huvila, Isto; Daniels, Mats; Cajander, Åsa; Åhlfeldt, Rose-Mharie
2016-01-01
Introduction: We report results of a study of how ordering and reading of printouts of medical records by regular and inexperienced readers relate to how the records are used, to the health information practices of patients, and to their expectations of the usefulness of new e-Health services and online access to medical records. Method: The study…
ERIC Educational Resources Information Center
Margraf, Hannah; Pinquart, Martin
2016-01-01
Individuals with emotional and behavioral disturbances (EBD) and those attending special schools tend to have poorer adult outcomes than adolescents without EBD and peers from regular schools. Using a four-group comparison (students with or without EBD from special schools and students with or without EBD from regular schools), the present study…
The neural substrates of impaired finger tapping regularity after stroke.
Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude
2010-03-01
Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Boverman, Gregory; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J; Kao, Tzu-Jen; Amm, Bruce C; Wang, Xin; Davenport, David M; Chong, David H; Sahni, Rakesh; Ashe, Jeffrey M
2017-04-01
In electrical impedance tomography (EIT), we apply patterns of currents on a set of electrodes at the external boundary of an object, measure the resulting potentials at the electrodes, and, given the aggregate dataset, reconstruct the complex conductivity and permittivity within the object. It is possible to maximize sensitivity to internal conductivity changes by simultaneously applying currents and measuring potentials on all electrodes but this approach also maximizes sensitivity to changes in impedance at the interface. We have, therefore, developed algorithms to assess contact impedance changes at the interface as well as to efficiently and simultaneously reconstruct internal conductivity/permittivity changes within the body. We use simple linear algebraic manipulations, the generalized singular value decomposition, and a dual-mesh finite-element-based framework to reconstruct images in real time. We are also able to efficiently compute the linearized reconstruction for a wide range of regularization parameters and to compute both the generalized cross-validation parameter as well as the L-curve, objective approaches to determining the optimal regularization parameter, in a similarly efficient manner. Results are shown using data from a normal subject and from a clinical intensive care unit patient, both acquired with the GE GENESIS prototype EIT system, demonstrating significantly reduced boundary artifacts due to electrode drift and motion artifact.
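The "wide range of regularization parameters" computation is cheap once a single decomposition of the linearized sensitivity matrix is available, since every Tikhonov solution is a filtered expansion in the same singular vectors. The sketch below uses a plain SVD rather than the paper's generalized SVD and dual-mesh machinery; all names are ours.

```python
import numpy as np

def tikhonov_sweep(J, b, lams):
    """Solve min ||J x - b||^2 + lam^2 ||x||^2 for many lam values.

    One SVD of J is shared across the whole sweep: each solution only
    re-filters the singular values, so extra lam values cost almost nothing.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    beta = U.T @ b                      # data projected onto left singular vectors
    sols = []
    for lam in lams:
        f = s / (s**2 + lam**2)         # filtered inverse singular values
        sols.append(Vt.T @ (f * beta))
    return sols
```

Collecting the residual norms ||J x_lam - b|| and solution norms ||x_lam|| over the sweep yields exactly the points that the L-curve criterion plots, and the generalized-cross-validation score can be evaluated from the same filter factors.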
AN ERP STUDY OF REGULAR AND IRREGULAR ENGLISH PAST TENSE INFLECTION
Newman, Aaron J.; Ullman, Michael T.; Pancheva, Roumyana; Waligura, Diane L.; Neville, Helen J.
2006-01-01
Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs (“regulars”) is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, “irregular” English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over right parietal sites (P600s), and which seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory. PMID:17070703
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
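The Expected Improvement that TRIKE maximizes inside its trust region has a standard closed form under the Kriging (Gaussian) predictive distribution. For minimization with predictive mean mu, standard deviation sigma, and incumbent best value f_min, it can be evaluated with stdlib math only (a textbook formula, not code from the paper):

```python
import math

def expected_improvement(mu: float, sigma: float, f_min: float) -> float:
    """EI for minimization under a Gaussian predictive distribution N(mu, sigma^2).

    EI = (f_min - mu) * Phi(z) + sigma * phi(z),  z = (f_min - mu) / sigma,
    where phi and Phi are the standard normal pdf and cdf.
    """
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # normal cdf
    return (f_min - mu) * Phi + sigma * phi
```

In the trust-region scheme the abstract describes, this quantity also supplies the denominator of the acceptance ratio: the actual improvement at the new iterate is compared against the EI that motivated sampling it, and the region grows or shrinks accordingly.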
NASA Astrophysics Data System (ADS)
Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao
2018-03-01
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. 
These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
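In symbols, writing R(psi) for the maximal Schmidt rank, the two properties at the heart of the above are (notation ours):

```latex
R(\psi \otimes \varphi) \;\ge\; R(\psi)\,R(\varphi)
\quad\text{(strict for some states)},
\qquad
R_{\infty}(\psi) \;=\; \lim_{n\to\infty} \bigl( R(\psi^{\otimes n}) \bigr)^{1/n},
```

i.e. the maximal Schmidt rank is super-multiplicative under tensor products, and the asymptotic maximal Schmidt rank is its regularization, which governs the tripartite-to-bipartite SLOCC transformation rate.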
Done, Aaron J; Traustadóttir, Tinna
2016-12-01
Older individuals who exercise regularly exhibit greater resistance to oxidative stress than their sedentary peers, suggesting that exercise can modify age-associated loss of resistance to oxidative stress. However, we recently demonstrated that a single bout of exercise confers protection against a subsequent oxidative challenge in young, but not older adults. We therefore hypothesized that repeated bouts of exercise would be needed to increase resistance to an oxidative challenge in sedentary older middle-aged adults. Sedentary older middle-aged men and women (50-63 years, n = 11) participated in an 8-week exercise intervention. Maximal oxygen consumption was measured before and after the intervention. The exercise intervention consisted of three sessions per week, for 45 min at an intensity corresponding to 70-85% maximal heart rate (HRmax). Resistance to oxidative stress was measured by the F2-isoprostane response to a forearm ischemia/reperfusion (I/R) trial. Each participant underwent the I/R trial before and after the exercise intervention. The intervention elicited a significant increase in maximal oxygen consumption (VO2max) (P < 0.0001). Baseline levels of F2-isoprostanes pre- and post-intervention did not differ, but the F2-isoprostane response to the I/R trial was significantly lower following the exercise intervention (time-by-trial interaction, P = 0.043). Individual improvements in aerobic fitness were associated with greater improvements in the F2-isoprostane response (r = -0.761, P = 0.011), further supporting the role of aerobic fitness in resistance to oxidative stress. These data demonstrate that regular exercise with improved fitness leads to increased resistance to oxidative stress in older middle-aged adults and that this measure is modifiable in previously sedentary individuals.
Tancredi, Giancarlo; Lambiase, Caterina; Favoriti, Alessandra; Ricupito, Francesca; Paoli, Sara; Duse, Marzia; De Castro, Giovanna; Zicari, Anna Maria; Vitaliti, Giovanna; Falsaperla, Raffaele; Lubrano, Riccardo
2016-04-27
An increasing number of children with chronic disease require a complete medical examination to be able to practice physical activity. In particular, children with a solitary functioning kidney (SFK) need an accurate functional evaluation to perform sports activities safely. The aim of our study was to evaluate the influence of regular physical activity on the cardiorespiratory function of children with solitary functioning kidney. Twenty-nine patients with congenital SFK, mean age 13.9 ± 5.0 years, and 36 controls (C), mean age 13.8 ± 3.7 years, underwent a cardiorespiratory assessment with spirometry and maximal cardiopulmonary exercise testing. All subjects were divided into two groups, sedentary (S) and trained (T) patients, by means of a standardized questionnaire about their weekly physical activity. We found that mean values of maximal oxygen consumption (VO2max) and exercise time (ET) were higher in T subjects than in S subjects. In particular, SFK-T subjects presented mean values of VO2max similar to C-T and significantly higher than C-S (SFK-T: 44.7 ± 6.3 vs C-S: 37.8 ± 3.7 ml/min/kg; p < 0.0008). We also found significantly higher mean values of ET in SFK-T than in C-S subjects (SFK-T: 12.9 ± 1.6 vs C-S: 10.8 ± 2.5 min; p < 0.02). Our study showed that regular moderate/high-level physical activity improves aerobic capacity (VO2max) and exercise tolerance in congenital SFK patients without increasing the risk of cardiovascular accidents; accordingly, sports activities should be strongly encouraged in SFK patients to maximize health benefits.
Lay theories of smoking and young adult nonsmokers' and smokers' smoking expectations.
Fitz, Caroline C; Kaufman, Annette; Moore, Philip J
2015-04-01
This study investigated the relationship between lay theories of cigarette smoking and expectations to smoke. An incremental lay theory of smoking entails the belief that smoking behavior can change; an entity theory entails the belief that smoking behavior cannot change. Undergraduate nonsmokers and smokers completed a survey that assessed lay theories of smoking and smoking expectations. Results demonstrated that lay theories of smoking were differentially associated with smoking expectations for nonsmokers and smokers: stronger incremental beliefs were associated with greater expectations of trying smoking for nonsmokers but lower expectations of becoming a regular smoker for smokers. Implications for interventions are discussed. © The Author(s) 2013.
Balasubramanian, Hari; Biehl, Sebastian; Dai, Longjie; Muriel, Ana
2014-03-01
Appointments in primary care are of two types: 1) prescheduled appointments, which are booked in advance of a given workday; and 2) same-day appointments, which are booked as calls come during the workday. The challenge for practices is to provide preferred time slots for prescheduled appointments and yet see as many same-day patients as possible during regular work hours. It is also important, to the extent possible, to match same-day patients with their own providers (so as to maximize continuity of care). In this paper, we present a mathematical framework (a stochastic dynamic program) for same-day patient allocation in multi-physician practices in which calls for same-day appointments come in dynamically over a workday. Allocation decisions have to be made in the presence of prescheduled appointments and without complete demand information. The objective is to maximize a weighted measure that includes the number of same-day patients seen during regular work hours as well as the continuity provided to these patients. Our experimental design is motivated by empirical data we collected at a 3-provider family medicine practice in Massachusetts. Our results show that the location of prescheduled appointments - i.e. where in the day these appointments are booked - has a significant impact on the number of same-day patients a practice can see during regular work hours, as well as the continuity the practice is able to provide. We find that a 2-Blocks policy which books prescheduled appointments in two clusters - early morning and early afternoon - works very well. We also provide a simple, easily implementable policy for schedulers to assign incoming same-day requests to appointment slots. Our results show that this policy provides near-optimal same-day assignments in a variety of settings.
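The near-optimal policy above comes from a stochastic dynamic program; purely as an illustration of the continuity objective, a greedy same-day assignment that prefers the patient's own provider can be sketched as follows (entirely a toy of ours, not the paper's policy):

```python
def assign_same_day(request_provider, free_slots):
    """Assign an incoming same-day request to a provider.

    free_slots: dict mapping provider -> number of open same-day slots.
    Prefer the patient's own provider (continuity of care); otherwise fall
    back to any provider with capacity. Returns the chosen provider, or
    None if the request overflows beyond regular work hours.
    """
    if free_slots.get(request_provider, 0) > 0:
        free_slots[request_provider] -= 1
        return request_provider
    for provider, n in free_slots.items():
        if n > 0:
            free_slots[provider] -= 1
            return provider
    return None
```

The dynamic program improves on such a greedy rule by anticipating future call arrivals: it may hold a scarce own-provider slot for a later request when the expected continuity gain outweighs the risk of the slot going unused.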
Pan, Wei; Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing ‘goal’ and ‘time’ factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665
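The ambiguity referred to above is easy to see numerically: two lotteries with identical expected utility can differ sharply in their probability of reaching a goal, which is the disambiguating 'goal' factor. The numbers below are our illustration, not data from the paper.

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs with probabilities summing to 1."""
    return sum(p * u for p, u in lottery)

def prob_reach_goal(lottery, goal):
    """The 'goal' factor: probability that the outcome meets or exceeds the goal."""
    return sum(p for p, u in lottery if u >= goal)

safe = [(1.0, 50.0)]                   # a certain 50
risky = [(0.5, 100.0), (0.5, 0.0)]     # 50-50 between 100 and 0
```

Both lotteries have expected utility 50, so EU maximization cannot rank them; an agent whose goal is 100, however, rationally prefers the risky lottery, which is the kind of foresight-driven "irrational" choice the network approach formalizes.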
Older adults' exercise behavior: roles of selected constructs of social-cognitive theory.
Umstattd, M Renée; Hallam, Jeffrey
2007-04-01
Exercise is consistently related to physical and psychological health benefits in older adults. Bandura's social-cognitive theory (SCT) is one theoretical perspective on understanding and predicting exercise behavior. Thus, the authors examined whether three SCT variables-self-efficacy, self-regulation, and outcome-expectancy value-predicted older adults' (N = 98) exercise behavior. Bivariate analyses revealed that regular exercise was associated with being male, White, and married; having higher income, education, and self-efficacy; using self-regulation skills; and having favorable outcome-expectancy values (p < .05). In a simultaneous multivariate model, however, self-regulation (p = .0097) was the only variable independently associated with regular exercise. Thus, exercise interventions targeting older adults should include components aimed at increasing the use of self-regulation strategies.
Can Monkeys Make Investments Based on Maximized Pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-01-01
Animals can maximize benefits but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777
Thermal comfort of dual-chamber ski gloves
NASA Astrophysics Data System (ADS)
Dotti, F.; Colonna, M.; Ferri, A.
2017-10-01
In this work, the special design of a pair of ski gloves has been assessed in terms of thermal comfort. The 2in1 Gore-Tex glove has a dual-chamber construction with two possible wearing configurations: one called “grip”, to maximize finger flexibility, and one called “warm”, to maximize thermal insulation in extremely cold conditions. The dual-chamber glove was compared with two regular ski gloves produced by the same company. An intermittent test on a treadmill was carried out in a climatic chamber: it consisted of four intense activity phases, during which the volunteer ran at 9 km/h on a 5% slope for 4 minutes, spaced out by 5-min resting phases. Finger temperature measurements were compared with the thermal sensations expressed by two volunteers during the test.
Vacuum polarization in the field of a multidimensional global monopole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grats, Yu. V., E-mail: grats@phys.msu.ru; Spirin, P. A.
2016-11-15
An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values 〈ϕ²〉_ren and 〈T_00〉_ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
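The entropy-seeking prediction can be illustrated with a toy scoring rule. This is a loose sketch of the idea only, not the paper's free-energy model; the two options and the entropy weight are invented:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of an outcome distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def score(option, w_entropy=1.0):
    """Expected utility plus weighted outcome entropy. A pure utility
    maximizer uses w_entropy = 0; a surprise-minimizing agent also values
    spread-out outcome distributions ('keeping options open')."""
    eu = sum(p * u for p, u in option)
    return eu + w_entropy * entropy([p for p, _ in option])

# Two invented options with identical expected utility (both 1.0):
certain = [(1.0, 1.0)]                  # one sure outcome of utility 1
spread = [(0.5, 0.0), (0.5, 2.0)]       # two equiprobable outcomes
# With w_entropy > 0, `spread` scores higher despite equal expected utility.
```

A pure expected-utility account (w_entropy = 0) cannot break the tie between the two options; the entropy bonus does.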
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
The Fermi LAT Very Important Project (VIP) List of Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Thompson, David J.; Fermi Large Area Telescope Collaboration
2018-01-01
Using nine years of Fermi Gamma-ray Space Telescope Large Area Telescope (LAT) observations, we have identified 30 projects for Active Galactic Nuclei (AGN) that appear to provide strong prospects for significant scientific advances. This Very Important Project (VIP) AGN list includes AGNs that have good multiwavelength coverage, are regularly detected by the Fermi LAT, and offer scientifically interesting timing or spectral properties. Each project has one or more LAT scientists identified who are actively monitoring the source. They will be regularly updating the LAT results for these VIP AGNs, working together with multiwavelength observers and theorists to maximize the scientific return during the coming years of the Fermi mission. See https://confluence.slac.stanford.edu/display/GLAMCOG/VIP+List+of+AGNs+for+Continued+Study
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-10-20
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_1/2, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_1/2 and generation independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. As a result, we illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B
2016-08-01
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Speeded Reaching Movements around Invisible Obstacles
Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.
2012-01-01
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
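The ideal movement planner described above can be sketched as a Monte Carlo expected-gain maximization in one dimension. The reward, penalty, noise level and geometry below are invented for illustration and are not the experiment's values:

```python
import random

def expected_gain(aim, target, obstacle, sigma=0.4, reward=2.5,
                  penalty=-5.0, radius=0.5, n=20000, seed=0):
    """Monte Carlo estimate of expected gain for a noisy 1-D endpoint:
    landing within `radius` of the target earns `reward`, landing within
    `radius` of the obstacle incurs `penalty` (illustrative numbers)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        hit = rng.gauss(aim, sigma)          # motor noise on the endpoint
        if abs(hit - target) < radius:
            total += reward
        if abs(hit - obstacle) < radius:
            total += penalty
    return total / n

# The planner does not aim at the target itself: it shifts the aim point
# away from the obstacle to trade a small hit probability for a large
# reduction in penalty probability.
aims = [i / 20 for i in range(-20, 41)]      # candidate aim points
best = max(aims, key=lambda a: expected_gain(a, target=0.0, obstacle=-0.8))
```

With the obstacle just left of the target, the gain-maximizing aim point lands slightly to the right of the target, mirroring the systematic aim shifts the Dominance Test probes for.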
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification, clustering is an unsupervised type of algorithm. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithms. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
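For concreteness, a minimal from-scratch K-means (Lloyd's algorithm) on synthetic 1-D data might look as follows; the wine-quality data and the logistic-regression step of the paper are not reproduced here:

```python
import random

def kmeans_1d(xs, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 1-D data: assign each point to the
    nearest centroid, then recompute each centroid as its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(xs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: abs(x - centroids[c]))
            clusters[j].append(x)
        # Keep the old centroid if a cluster happens to empty out.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated synthetic groups (stand-ins for two quality classes)
rng = random.Random(1)
data = [rng.gauss(0.0, 0.5) for _ in range(100)] + \
       [rng.gauss(5.0, 0.5) for _ in range(100)]
c_lo, c_hi = kmeans_1d(data, 2)
```

With the two synthetic groups above, the returned centroids should land near 0 and 5; EM differs in that it assigns soft responsibilities instead of hard nearest-centroid labels.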
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm rests on the assumption that a point cloud can be modelled as a mixture of Gaussians, so that separating ground points from non-ground points reduces to separating the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to compute maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed, and after several iterations each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to refine the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method filters non-ground points effectively. For quantitative evaluation, the dataset provided by the ISPRS was adopted; the proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
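The separation step the abstract describes can be sketched as textbook EM on a two-component 1-D Gaussian mixture over point heights. The data below are synthetic, and the paper's intensity-based refinement is omitted:

```python
import math, random

def em_gmm_1d(xs, iters=100):
    """EM for a 2-component 1-D Gaussian mixture: the E-step computes each
    point's responsibilities, the M-step re-estimates weights, means and
    variances from those responsibilities."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities r[i][k] of component k for point i
        r = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            r.append([pk / s for pk in p])
        # M-step: weighted re-estimation of the mixture parameters
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, w

rng = random.Random(0)
ground = [rng.gauss(0.0, 0.3) for _ in range(200)]   # terrain heights
objects = [rng.gauss(8.0, 1.0) for _ in range(80)]   # buildings, vegetation
mu, var, w = em_gmm_1d(ground + objects)

def label(x):
    """Assign a point to the component with the larger likelihood."""
    p = [w[k] / math.sqrt(2 * math.pi * var[k])
         * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return 0 if p[0] > p[1] else 1
```

Real LiDAR filtering works on residual heights after trend removal, but the mechanics of the E- and M-steps are the same.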
McAuley, Edward; Szabo, Amanda; Gothe, Neha; Olson, Erin A
2011-07-01
Attenuating the physical decline and increases in disability associated with the aging process is an important public health priority. Evidence suggests that regular physical activity participation improves functional performance, such as walking, standing balance, flexibility, and getting up out of a chair, and also plays an important role in the disablement process by providing a protective effect against functional limitations. Whether these effects are direct or indirect has yet to be reliably established. In this review, the authors take the perspective that such relationships are indirect and operate through self-efficacy expectations. They first provide an introduction to social cognitive theory followed by an overview of self-efficacy's reciprocal relationship with physical activity. They then consider the literature that documents the effects of physical activity on functional performance and functional limitations in older adults and the extent to which self-efficacy might mediate these relationships. Furthermore, they also present evidence that suggests that self-efficacy plays a pivotal role in a model in which the protective effects conferred by physical activity on functional limitations operate through functional performance. The article concludes with a brief section making recommendations for the development of strategies within physical activity and rehabilitative programs for maximizing the major sources of efficacy information.
Hawke, Lisa D; Relihan, Jacqueline; Miller, Joshua; McCann, Emma; Rong, Jessica; Darnay, Karleigh; Docherty, Samantha; Chaim, Gloria; Henderson, Joanna L
2018-06-01
Engaging youth as partners in academic research projects offers many benefits for the youth and the research team. However, it is not always clear to researchers how to engage youth effectively to optimize the experience and maximize the impact. This article provides practical recommendations to help researchers engage youth in meaningful ways in academic research, from initial planning to project completion. These general recommendations can be applied to all types of research methodologies, from community action-based research to highly technical designs. Youth can and do provide valuable input into academic research projects when their contributions are authentically valued, their roles are clearly defined, communication is clear, and their needs are taken into account. Researchers should be aware of the risk of tokenizing the youth they engage and work proactively to take their feedback into account in a genuine way. Some adaptations to regular research procedures are recommended to improve the success of the youth engagement initiative. By following these guidelines, academic researchers can make youth engagement a key tenet of their youth-oriented research initiatives, increasing the feasibility, youth-friendliness and ecological validity of their work and ultimately improving the value and impact of the results their research produces. © 2018 The Authors. Health Expectations published by John Wiley & Sons Ltd.
Hopkins, Melanie J; Smith, Andrew B
2015-03-24
How ecological and morphological diversity accrues over geological time has been much debated by paleobiologists. Evidence from the fossil record suggests that many clades reach maximal diversity early in their evolutionary history, followed by a decline in evolutionary rates as ecological space fills or due to internal constraints. Here, we apply recently developed methods for estimating rates of morphological evolution during the post-Paleozoic history of a major invertebrate clade, the Echinoidea. Contrary to expectation, rates of evolution were lowest during the initial phase of diversification following the Permo-Triassic mass extinction and increased over time. Furthermore, although several subclades show high initial rates and net decreases in rates of evolution, consistent with "early bursts" of morphological diversification, at more inclusive taxonomic levels, these bursts appear as episodic peaks. Peak rates coincided with major shifts in ecological morphology, primarily associated with innovations in feeding strategies. Despite having similar numbers of species in today's oceans, regular echinoids have accrued far less morphological diversity than irregular echinoids due to lower intrinsic rates of morphological evolution and less morphological innovation, the latter indicative of constrained or bounded evolution. These results indicate that rates of evolution are extremely heterogeneous through time and their interpretation depends on the temporal and taxonomic scale of analysis.
Context-aware adaptive spelling in motor imagery BCI
NASA Astrophysics Data System (ADS)
Perdikis, S.; Leeb, R.; Millán, J. d. R.
2016-06-01
Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
Length and elasticity of side reins affect rein tension at trot.
Clayton, Hilary M; Larson, Britt; Kaiser, LeeAnn J; Lavagnino, Michael
2011-06-01
This study investigated the horse's contribution to tension in the reins. The experimental hypotheses were that tension in side reins (1) increases biphasically in each trot stride, (2) changes inversely with rein length, and (3) changes with elasticity of the reins. Eight riding horses trotted in hand at consistent speed in a straight line wearing a bit and bridle and three types of side reins (inelastic, stiff elastic, compliant elastic) were evaluated in random order at long, neutral, and short lengths. Strain gauge transducers (240 Hz) measured minimal, maximal and mean rein tension, rate of loading and impulse. The effects of rein type and length were evaluated using ANOVA with Bonferroni post hoc tests. Rein tension oscillated in a regular pattern with a peak during each diagonal stance phase. Within each rein type, minimal, maximal and mean tensions were higher with shorter reins. At neutral or short lengths, minimal tension increased and maximal tension decreased with elasticity of the reins. Short, inelastic reins had the highest maximal tension and rate of loading. Since the tension variables respond differently to rein elasticity at different lengths, it is recommended that a set of variables representing different aspects of rein tension should be reported. Copyright © 2010 Elsevier Ltd. All rights reserved.
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility by considering all related negotiations are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, through comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
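One minimal way to picture the policy comparison step is to enumerate all joint policies and score each by joint success rate times joint utility. The negotiations, probabilities and utilities below are invented, and this assumes a simple model in which related negotiations only pay off together; the actual MNN/MNID machinery is far richer:

```python
from itertools import product

# Hypothetical related negotiations: for each, a list of candidate policies
# as (success_probability, utility_if_successful) pairs. All numbers invented.
negotiations = [
    [(0.9, 2.0), (0.6, 5.0)],   # e.g. a delivery-terms negotiation
    [(0.8, 3.0), (0.5, 6.0)],   # e.g. a price negotiation
]

def global_eu(policy):
    """Joint success rate times joint utility: under the all-or-nothing
    assumption, the joint success rate is the product of the individual
    rates and the joint utility is the sum of the utilities."""
    joint_p = 1.0
    joint_u = 0.0
    for p, u in policy:
        joint_p *= p
        joint_u += u
    return joint_p * joint_u

# Compare every possible joint policy and keep the optimal one.
best = max(product(*negotiations), key=global_eu)
```

Note that the best joint policy is not simply the per-negotiation optimum: a risky high-utility option in one negotiation can drag down the joint success rate of the whole set.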
Contract W911NF-12-C-0102 (Advanced Diamond Technologies, Inc.)
2013-06-24
Measurement of the diamond films, including thickness, resistivity, residual stress and Raman spectra, is finished. The Raman spectra show basically the regular nanocrystalline diamond signature, as expected. [Fig. 2: Raman spectra (λ = 532 nm) of (a) all diamond with different doping levels and (b) diamond only with ...]
Network clustering and community detection using modulus of families of loops.
Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina
2017-01-01
We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
Cosmological coherent state expectation values in loop quantum gravity I. Isotropic kinematics
NASA Astrophysics Data System (ADS)
Dapor, Andrea; Liegener, Klaus
2018-07-01
This is the first paper of a series dedicated to loop quantum gravity (LQG) coherent states and cosmology. The concept is based on the effective dynamics program of Loop Quantum Cosmology, where the classical dynamics generated by the expectation value of the Hamiltonian on semiclassical states is found to be in agreement with the quantum evolution of such states. We ask the question of whether this expectation value agrees with the one obtained in the full theory. The answer is in the negative (Dapor and Liegener 2017, arXiv:1706.09833). This series of papers is dedicated to detailing the computations that lead to that surprising result. In the current paper, we construct the family of coherent states in LQG which represent flat (k = 0) Robertson–Walker spacetimes, and present the tools needed to compute expectation values of polynomial operators in holonomy and flux on such states. These tools will be applied to the LQG Hamiltonian operator (in Thiemann regularization) in the second paper of the series. The third paper will present an extension to cosmologies and a comparison with alternative regularizations of the Hamiltonian.
Depressive symptoms, depression proneness, and outcome expectancies for cigarette smoking.
Friedman-Wheeler, Dara G; Ahrens, Anthony H; Haaga, David A F; McIntosh, Elizabeth; Thorndike, Frances P
2007-08-01
The high rates of cigarette smoking among depressed persons may be partially explained by increased positive expectancies for cigarette smoking among this population. In view of theoretical and empirical work on depressed people's negative views of the future, though, it would be expected that depressed smokers would hold particularly negative expectancies about the effects of cigarette smoking. The two current studies examined the relations between depression and smoking outcome expectancies in (a) a general population of adult regular smokers and (b) adult smokers seeking to quit smoking. Depressive symptoms and depression proneness both showed significant positive correlations with positive expectancies for cigarette smoking. Several positive correlations with negative expectancies also emerged. Thus, experiencing depressive symptoms may serve to amplify both favorable and unfavorable expectancies about the effects of smoking.
Waldmann, Elisa; Vogt, Anja; Crispin, Alexander; Altenhofer, Julia; Riks, Ina; Parhofer, Klaus G
2017-04-01
In this study, we evaluated the effect of mipomersen in patients with severe LDL-hypercholesterolaemia and atherosclerosis, treated by lipid lowering drugs and regular lipoprotein apheresis. This prospective, randomized, controlled phase II single center trial enrolled 15 patients (9 males, 6 females; 59 ± 9 y, BMI 27 ± 4 kg/m²) with established atherosclerosis, LDL-cholesterol ≥130 mg/dL (3.4 mmol/L) despite maximal possible drug therapy, and fulfilling German criteria for regular lipoprotein apheresis. All patients were on stable lipid lowering drug therapy and regular apheresis for >3 months. Patients randomized to treatment (n = 11) self-injected mipomersen 200 mg sc weekly, at day 4 after apheresis, for 26 weeks. Patients randomized to control (n = 4) continued apheresis without injection. The primary endpoint was the change in pre-apheresis LDL-cholesterol. Of the patients randomized to mipomersen, 3 discontinued the drug early (<12 weeks therapy) for side effects. In their place, another 3 patients were recruited and randomized. Further, 4 patients discontinued mipomersen between 12 and 26 weeks for side effects (moderate to severe injection site reactions n = 3 and elevated liver enzymes n = 1). In those treated for >12 weeks, mipomersen reduced pre-apheresis LDL-cholesterol significantly by 22.6 ± 17.0%, from a baseline of 4.8 ± 1.2 mmol/L to 3.7 ± 0.9 mmol/L, while there was no significant change in the control group (+1.6 ± 9.3%), with the difference between the groups being significant (p=0.02). Mipomersen also decreased pre-apheresis lipoprotein(a) (Lp(a)) concentration from a median baseline of 40.2 mg/dL (32.5,71) by 16% (-19.4,13.6), though without significance (p=0.21). Mipomersen reduces LDL-cholesterol (significantly) and Lp(a) (non-significantly) in patients on maximal lipid-lowering drug therapy and regular apheresis, but is often associated with side effects. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
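A toy version of such an allocation problem can be enumerated directly. This sketch assumes M/M/1 service points and a utility that is simply the negative of expected aggregate waiting time; the paper's actual utility function and library setting are not reproduced, and all rates are invented:

```python
from itertools import product

def mm1_wait(lam, mu):
    """Expected time in system for an M/M/1 queue; unstable if mu <= lam."""
    return float('inf') if mu <= lam else 1.0 / (mu - lam)

# Hypothetical service desks: arrival rates per hour. Each allocated staff
# unit adds 5.0/hour of service capacity on top of a base rate of 6.0/hour.
arrivals = [5.0, 3.0, 8.0]
base_mu, per_unit, budget = 6.0, 5.0, 4

def total_wait(alloc):
    """Expected aggregate waiting, weighting each desk by its arrival rate."""
    return sum(lam * mm1_wait(lam, base_mu + per_unit * a)
               for lam, a in zip(arrivals, alloc))

# Enumerate every split of the budget across desks and keep the allocation
# that maximizes utility, i.e. minimizes expected aggregate waiting.
allocs = [a for a in product(range(budget + 1), repeat=3) if sum(a) == budget]
best = min(allocs, key=total_wait)
```

The enumeration makes the trade-off visible: the busiest desk must get enough capacity to be stable at all, after which marginal units go wherever they reduce expected waiting most.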
The Dynamics of Crime and Punishment
NASA Astrophysics Data System (ADS)
Hausken, Kjell; Moxnes, John F.
This article analyzes crime development, one of the largest threats in today's world and frequently framed as the "war on crime". The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process, which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time spent in imprisonment. Each criminal maximizes expected utility, defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation for the criminal's utility-maximizing response to a given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. This dynamic perspective has, to our knowledge, not been presented earlier. The empirical data on crime intensity and imprisonment for Norway, England and Wales, and the US support the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers, who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
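The expected values and variances mentioned above follow from a basic property of any non-stationary Poisson process: E[N(T)] = ∫₀ᵀ λ(t) dt, and the variance equals the mean. A short illustration with an assumed seasonal rate function (my example, not the paper's calibrated model):

```python
import math

def expected_count(rate, T, steps=100_000):
    """Trapezoidal approximation of E[N(T)] = integral_0^T rate(t) dt."""
    h = T / steps
    total = 0.5 * (rate(0.0) + rate(T))
    for i in range(1, steps):
        total += rate(i * h)
    return total * h

# Assumed example: seasonal fluctuation around a baseline of 2 crimes/week.
rate = lambda t: 2.0 + math.sin(2 * math.pi * t / 52.0)
mean = expected_count(rate, 52.0)   # one year, measured in weeks
variance = mean                     # Poisson property: Var[N(T)] = E[N(T)]
```

Over a full 52-week period the sinusoidal term integrates to zero, so the expected count reduces to the baseline contribution of 104.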
Acceptable regret in medical decision making.
Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M
1999-09-01
When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
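For background, the classical expected-utility treatment threshold that the regret framework modifies can be stated in a few lines; this is the standard Pauker-Kassirer-style threshold, not the article's acceptable-regret bounds, and the numbers are illustrative:

```python
# Classical decision threshold: treat when P(disease) exceeds p*.
def treatment_threshold(net_benefit, net_harm):
    """Probability of disease above which treating maximizes expected utility.

    net_benefit: utility gained by treating a diseased patient.
    net_harm:    utility lost by treating a healthy patient.
    """
    return net_harm / (net_harm + net_benefit)

# Illustrative values: benefit 8 utility units, harm 2 -> treat above p = 0.2.
p_star = treatment_threshold(net_benefit=8.0, net_harm=2.0)
```

The article's contribution is to split this single threshold into two regret-based bounds, one governed only by the treatment's net benefit and one only by its risks.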
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
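The design criterion can be illustrated directly: pick quantizer thresholds that maximize I(X; Q) between the source bit X and the quantized message Q. The LLR model below (consistent Gaussian, N(±m, 2m)) and the hand-picked threshold sets are my assumptions for the sketch, not the paper's optimization procedure:

```python
import math

def _phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mutual_information(thresholds, m):
    """I(X;Q) in bits for LLR ~ N(+m, 2m) if X=0 and N(-m, 2m) if X=1."""
    sd = math.sqrt(2.0 * m)
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    mi = 0.0
    for lo, hi in zip(edges, edges[1:]):
        # Marginal probability of this quantization bin.
        p_bin = 0.5 * (_phi((hi - m) / sd) - _phi((lo - m) / sd)
                       + _phi((hi + m) / sd) - _phi((lo + m) / sd))
        for mean in (+m, -m):                    # X = 0 and X = 1, equiprobable
            p_bin_given_x = _phi((hi - mean) / sd) - _phi((lo - mean) / sd)
            if p_bin_given_x > 0:
                mi += 0.5 * p_bin_given_x * math.log2(p_bin_given_x / p_bin)
    return mi

# A finer quantizer that refines a coarser one can only gain information.
mi_hard = mutual_information([0.0], m=2.0)               # 1-bit hard decision
mi_soft = mutual_information([-2.0, 0.0, 2.0], m=2.0)    # 2-bit soft messages
```

Since the three-threshold quantizer refines the single-threshold one, the data processing inequality guarantees mi_soft ≥ mi_hard, mirroring why optimized multi-bit messages close the gap to the floating-point decoder.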
Decay, excitation, and ionization of lithium Rydberg states by blackbody radiation
NASA Astrophysics Data System (ADS)
Ovsiannikov, V. D.; Glukhov, I. L.
2010-09-01
Details of interaction between the blackbody radiation and neutral lithium atoms were studied in the temperature ranges T = 100-2000 K. The rates of thermally induced decays, excitations and ionization were calculated for S-, P- and D-series of Rydberg states in the Fues' model potential approach. The quantitative regularities for the states of the maximal rates of blackbody-radiation-induced processes were determined. Approximation formulas were proposed for analytical representation of the depopulation rates.
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) assigns a probability to a word sequence and thus provides the solution to word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance obtained by applying the rapid BRNN-LM under different conditions.
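The objective form is easy to show in miniature: a zero-mean Gaussian prior on the weights turns the cross-entropy error into E(w) = CE(w) + (α/2)·||w||². The logistic toy model and the data below are assumptions for illustration, not the BRNN-LM itself:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def regularized_cross_entropy(w, data, alpha):
    """Cross-entropy of a logistic model plus the Gaussian-prior penalty."""
    ce = 0.0
    for x, y in data:                        # labels y in {0, 1}
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        ce -= y * math.log(p) + (1 - y) * math.log(1 - p)
    penalty = 0.5 * alpha * sum(wi * wi for wi in w)   # (alpha/2) * ||w||^2
    return ce + penalty

data = [((1.0, 0.0), 1), ((0.0, 1.0), 0)]
e_small = regularized_cross_entropy((2.0, -2.0), data, alpha=0.1)
e_big   = regularized_cross_entropy((2.0, -2.0), data, alpha=10.0)
```

Increasing the hyperparameter α penalizes the same weights more heavily; in the paper α itself is not hand-set but estimated by maximizing the marginal likelihood.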
Phenomenology of maximal and near-maximal lepton mixing
NASA Astrophysics Data System (ADS)
Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε| < 0.3. We show that both probabilities and observables depend on ε quadratically when effects are due to vacuum oscillations and they depend on ε linearly if matter effects dominate. The most important information on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ε| < 0.3 is allowed within ~4σ (99.995% CL). We suggest ways to measure ε in future experiments. The observable that is most sensitive to ε is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δε ~ 0.07. We also discuss the effects of maximal and near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
New tip design and shock wave pattern of electrohydraulic probes for endoureteral lithotripsy.
Vorreuther, R
1993-02-01
A new tip design of a 3.3F electrohydraulic probe for endoureteral lithotripsy was evaluated in comparison to a regular probe. The peak pressure, as well as the slope of the shock front, depend solely on the voltage. Increasing the capacity leads merely to broader pulses. A laser-like short high-pressure pulse has a greater impact on stone disintegration than a corresponding broader low-pressure pulse of the same energy. Using the regular probe, only positive pressures were obtained. Pressure distribution around the regular tip was approximately spherical, whereas the modified probe tip "beamed" the shock wave to a great extent. In addition, a negative-pressure half-cycle was added to the initial positive peak pressure, which resulted in a higher maximal pressure amplitude. The directed shock wave had a greater depth of penetration into a model stone. Thus, the ability of the new probe to destroy harder stones especially should be greater. The trauma to the ureter was reduced when touching the wall tangentially. No difference in the effect of the two probes was seen when placing the probe directly on the mucosa.
Managing a closed-loop supply chain inventory system with learning effects
NASA Astrophysics Data System (ADS)
Jauhari, Wakhid Ahmad; Dwicahyani, Anindya Rachma; Hendaryani, Oktiviandri; Kurdhi, Nughthoh Arfawi
2018-02-01
In this paper, we propose a closed-loop supply chain model consisting of a retailer and a manufacturer. We investigate the impact of learning in regular production, remanufacturing, and reworking. Customer demand is assumed deterministic and is satisfied from both the regular production and the remanufacturing process. The return rate of used items depends on quality. We propose a mathematical model with the objective of maximizing the joint total profit by simultaneously determining the length of the retailer's ordering cycle and the numbers of regular production and remanufacturing cycles. An algorithm is suggested for finding the optimal solution, and a numerical example illustrates the application of the proposed model. The results show that the integrated model performs better in reducing total cost than the independent model. The total cost is most affected by changes in the unit production cost and the acceptable quality level. In addition, changes in the proportion of defective items and the fraction of holding costs significantly influence the retailer's ordering period.
Assessing park-and-ride impacts.
DOT National Transportation Integrated Search
2010-06-01
Efficient transportation systems are vital to quality-of-life and mobility issues, and an effective park-and-ride (P&R) network can help maximize system performance. Properly placed P&R facilities are expected to result in fewer calls to increase...
Are H-reflex and M-wave recruitment curve parameters related to aerobic capacity?
Piscione, Julien; Grosset, Jean-François; Gamet, Didier; Pérot, Chantal
2012-10-01
Soleus Hoffmann reflex (H-reflex) amplitude is affected by a training period and type and level of training are also well known to modify aerobic capacities. Previously, paired changes in H-reflex and aerobic capacity have been evidenced after endurance training. The aim of this study was to investigate possible links between H- and M-recruitment curve parameters and aerobic capacity collected on a cohort of subjects (56 young men) that were not involved in regular physical training. Maximal H-reflex normalized with respect to maximal M-wave (H(max)/M(max)) was measured as well as other parameters of the H- or M-recruitment curves that provide information about the reflex or direct excitability of the motoneuron pool, such as thresholds of stimulus intensity to obtain H or M response (H(th) and M(th)), the ascending slope of H-reflex, or M-wave recruitment curves (H(slp) and M(slp)) and their ratio (H(slp)/M(slp)). Aerobic capacity, i.e., maximal oxygen consumption and maximal aerobic power (MAP) were, respectively, estimated from a running field test and from an incremental test on a cycle ergometer. Maximal oxygen consumption was only correlated with M(slp), an indicator of muscle fiber heterogeneity (p < 0.05), whereas MAP was not correlated with any of the tested parameters (p > 0.05). Although higher H-reflex are often described for subjects with a high aerobic capacity because of endurance training, at a basic level (i.e., without training period context) no correlation was observed between maximal H-reflex and aerobic capacity. Thus, none of the H-reflex or M-wave recruitment curve parameters, except M(slp), was related to the aerobic capacity of young, untrained male subjects.
Three faces of node importance in network epidemiology: Exact results for small graphs
NASA Astrophysics Data System (ADS)
Holme, Petter
2017-12-01
We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
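Exact computation of the expected outbreak size is feasible on such small graphs because one can enumerate all configurations. The sketch below uses the common constant-transmissibility (bond-percolation) reduction of SIR, which is a simplification of the paper's rate-based parametrization, and sums over all 2^E open-edge subsets:

```python
from itertools import combinations

def expected_outbreak(nodes, edges, seeds, T):
    """Exact E[outbreak size]: sum over open-edge subsets of P(subset) * reached."""
    total = 0.0
    for k in range(len(edges) + 1):
        for open_edges in combinations(edges, k):
            prob = T ** k * (1 - T) ** (len(edges) - k)
            # Nodes reachable from the seeds through the open edges (BFS).
            reached, frontier = set(seeds), list(seeds)
            while frontier:
                u = frontier.pop()
                for a, b in open_edges:
                    for v in ((b,) if a == u else (a,) if b == u else ()):
                        if v not in reached:
                            reached.add(v)
                            frontier.append(v)
            total += prob * len(reached)
    return total

# Path graph 0-1-2 with the seed at an end node and transmissibility 0.5:
# E[size] = 1 + 0.5 + 0.25 = 1.75.
size = expected_outbreak([0, 1, 2], [(0, 1), (1, 2)], seeds=[0], T=0.5)
```

The exponential cost in the number of edges is exactly why the paper restricts itself to connected graphs of three to seven nodes.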
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-01-01
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk-sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite-state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process whose intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling of many practical situations, like markets with liquidity restrictions; on the other hand, it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Schrempf, Alexandra; Giehr, Julia; Röhrl, Ramona; Steigleder, Sarah; Heinze, Jürgen
2017-04-01
One of the central tenets of life-history theory is that organisms cannot simultaneously maximize all fitness components. This results in the fundamental trade-off between reproduction and life span known from numerous animals, including humans. Social insects are a well-known exception to this rule: reproductive queens outlive nonreproductive workers. Here, we take a step forward and show that under identical social and environmental conditions the fecundity-longevity trade-off is absent also within the queen caste. A change in reproduction did not alter life expectancy, and even a strong enforced increase in reproductive efforts did not reduce residual life span. Generally, egg-laying rate and life span were positively correlated. Queens of perennial social insects thus seem to maximize at the same time two fitness parameters that are normally negatively correlated. Even though they are not immortal, they best approach a hypothetical "Darwinian demon" in the animal kingdom.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and for the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
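A simplified greedy scheduler in the spirit of the poster's reward/cost setup (the ratio heuristic and the target table below are my assumptions, not the actual AYO implementation): repeatedly take the target with the best completeness-per-unit-time ratio until the mission time budget is spent.

```python
def greedy_schedule(targets, budget):
    """targets: list of (name, completeness, cost); cost = integration + overhead."""
    ranked = sorted(targets, key=lambda t: t[1] / t[2], reverse=True)
    plan, total_completeness, spent = [], 0.0, 0.0
    for name, comp, cost in ranked:
        if spent + cost <= budget:       # skip targets that overrun the budget
            plan.append(name)
            spent += cost
            total_completeness += comp
    return plan, total_completeness

# Hypothetical targets: (name, single-visit completeness, time cost in days).
targets = [("A", 0.30, 10.0), ("B", 0.25, 5.0), ("C", 0.10, 1.0), ("D", 0.40, 30.0)]
plan, total = greedy_schedule(targets, budget=16.0)
```

Note that the highest-completeness target ("D") is passed over because its time cost dominates; ratio-greedy schedules trade individual reward for sum completeness over the whole mission.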
Carlsen, K H; Oseid, S; Sandnes, T; Trondskog, B; Røksund, O
1991-03-20
Geilomo hospital for children with asthma and allergy is situated 800 m above sea level in a non-polluted area in the central part of Norway. 31 children who were admitted to this hospital from different parts of Norway (mostly from the main cities) were studied for six weeks. They underwent physical training and daily measurements were taken of lung function and the effect of bronchodilators. The bronchial responsiveness of the children improved significantly from week 1 to week 6, as measured by reduction in lung function after sub-maximal running on a treadmill. There was significant improvement in daily symptom score, and in degree of obstruction as shown by physical examination. The children's improvement was probably the result of a stay in a mountainous area with very little air pollution or allergens, combined with regular planned physical activity, and regular medication and surveillance.
Parent and teen agreement on driving expectations prior to teen licensure.
Hamann, Cara J; Ramirez, Marizen; Yang, Jingzhen; Chande, Vidya; Peek-Asa, Corinne
2014-01-01
To examine pre-licensure agreement on driving expectations and predictors of teen driving expectations among parent-teen dyads. Cross-sectional survey of 163 parent-teen dyads. Descriptive statistics, weighted Kappa coefficients, and linear regression were used to examine expectations about post-licensure teen driving. Teens reported high pre-licensure unsupervised driving (N = 79, 48.5%) and regular access to a car (N = 130, 81.8%). Parents and teens had low agreement on teen driving expectations (eg, after dark, κw = 0.23). Each time teens currently drove to/from school, their expectation of driving in risky conditions post-licensure increased (β = 0.21, p = .02). Pre-licensure improvement of parent-teen agreement on driving expectations are needed to have the greatest impact on preventing teens from driving in high risk conditions.
Exercise prescription for the elderly: current recommendations.
Mazzeo, R S; Tanaka, H
2001-01-01
The benefits for elderly individuals of regular participation in both cardiovascular and resistance-training programmes are great. Health benefits include a significant reduction in risk of coronary heart disease, diabetes mellitus and insulin resistance, hypertension and obesity as well as improvements in bone density, muscle mass, arterial compliance and energy metabolism. Additionally, increases in cardiovascular fitness (maximal oxygen consumption and endurance), muscle strength and overall functional capacity are forthcoming allowing elderly individuals to maintain their independence, increase levels of spontaneous physical activity and freely participate in activities associated with daily living. Taken together, these benefits associated with involvement in regular exercise can significantly improve the quality of life in elderly populations. It is noteworthy that the quality and quantity of exercise necessary to elicit important health benefits will differ from that needed to produce significant gains in fitness. This review describes the current recommendations for exercise prescriptions for the elderly for both cardiovascular and strength/resistance-training programmes. However, it must be noted that the benefits described are of little value if elderly individuals do not become involved in regular exercise regimens. Consequently, the major challenges facing healthcare professionals today concern: (i) the implementation of educational programmes designed to inform elderly individuals of the health and functional benefits associated with regular physical activity as well as how safe and effective such programmes can be; and (ii) design interventions that will both increase involvement in regular exercise as well as improve adherence and compliance to such programmes.
Maximizing investments in work zone safety in Oregon : final report.
DOT National Transportation Integrated Search
2011-05-01
Due to the federal stimulus program and the 2009 Jobs and Transportation Act, the Oregon Department of Transportation (ODOT) anticipates that a large increase in highway construction will occur. There is the expectation that, since transportation saf...
ERIC Educational Resources Information Center
Lashway, Larry
1997-01-01
Principals today are expected to maximize their schools' performances with limited resources while also adopting educational innovations. This synopsis reviews five recent publications that offer some important insights about the nature of principals' leadership strategies: (1) "Leadership Styles and Strategies" (Larry Lashway); (2) "Facilitative…
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis.
Optimal management of batteries in electric systems
Atcitty, Stanley; Butler, Paul C.; Corey, Garth P.; Symons, Philip C.
2002-01-01
An electric system including at least a pair of battery strings and an AC source minimizes the use and maximizes the efficiency of the AC source by using the AC source only to charge all battery strings at the same time. Then one or more battery strings is used to power the load while management, such as application of a finish charge, is provided to one battery string. After another charge cycle, the roles of the battery strings are reversed so that each battery string receives regular management.
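The management scheme described in the abstract can be sketched as a simple role rotation; the states and swap rule below are my paraphrase for illustration, not the patented control logic: charge all strings together from the AC source, then serve the load from one string while the other receives management (such as a finish charge), and swap roles after each charge cycle.

```python
def rotate_roles(strings, cycle):
    """Return (serving_string, strings_under_management) for a charge cycle."""
    serving = strings[cycle % len(strings)]      # this string powers the load
    managed = [s for s in strings if s != serving]  # these get a finish charge
    return serving, managed

strings = ["string_A", "string_B"]
first = rotate_roles(strings, cycle=0)
second = rotate_roles(strings, cycle=1)
```

Rotating which string serves the load is what lets every string receive regular management while the AC source is used only for the shared bulk charge.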
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.A.; Bryden, N.A.; Polansky, M.M.
1986-03-05
To determine if degree of training affects urinary Cr losses, Cr excretion of 8 adult trained and 5 untrained runners was determined on rest days and following exercise at 90% of maximal oxygen uptake on a treadmill to exhaustion with 30 second exercise and 30 second rest periods. Subjects were fed a constant daily diet containing 9 µg of Cr per 1000 calories to minimize changes due to diet. Maximal oxygen consumption of the trained runners was in the good or above range based upon their age, and that of the untrained runners was average or below. While consuming the control diet, basal urinary Cr excretion of subjects who exercised regularly was significantly lower than that of the sedentary control subjects, 0.09 +/- 0.01 and 0.21 +/- 0.03 µg/day (mean +/- SEM), respectively. Daily urinary Cr excretion of trained subjects was significantly higher on the day of a single exercise bout at 90% of maximal oxygen consumption compared to nonexercise days, 0.12 +/- 0.02 and 0.09 +/- 0.01 µg/day, respectively. Urinary Cr excretion of 5 untrained subjects was not altered following controlled exercise. These data demonstrate that basal urinary Cr excretion and excretion in response to exercise are related to maximal oxygen consumption and therefore degree of fitness.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single-frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes, and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple-frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-frequency-ratio (DFR) space. However, the derivation of accurate snow estimates from triple-frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in extratropical cyclones (ETCs) from triple-frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple-frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple-frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX).
Results show that snowfall estimates above the freezing level in ETCs that are consistent with the triple-frequency radar observations, as well as with independent rainfall estimates below the freezing level, may be derived using the EM methodology formulated in the study.
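The alternating E/M structure described above can be caricatured with a toy scheme: a least-squares amplitude fit under an assumed power-law relation, alternated with re-optimisation of the assumed exponent. The forward model y ≈ w·x^b, the grid over b, and all the numbers below are illustrative assumptions, not the authors' snow retrieval:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, 200)
y = 3.0 * x**1.7 + 0.01 * rng.normal(size=200)   # synthetic data, true (w, b) = (3, 1.7)

b_grid = np.linspace(1.0, 2.5, 151)              # candidate exponents (step 0.01)
b = 1.0                                          # deliberately wrong starting assumption
for _ in range(20):
    # "E"-step: least-squares amplitude estimate under the current assumption b
    phi = x**b
    w = float(phi @ y / (phi @ phi))
    # "M"-step: re-optimise the assumed exponent b given the fitted amplitude w
    b = min(b_grid, key=lambda bb: float(np.sum((y - w * x**bb) ** 2)))

print(round(w, 2), round(b, 2))
```

The alternation recovers the generating parameters from a poor initial assumption, which is the qualitative behaviour the two-step EM retrieval relies on.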
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
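For reference alongside the scoring variants discussed, the baseline EM (MLEM) iteration for Poisson data has a compact multiplicative form: back-project the ratio of measured to estimated projections and divide by the sensitivity image. A minimal sketch on an invented toy system matrix (sizes and data are arbitrary, not a clinical geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (40, 16))        # toy system matrix (40 rays, 16 pixels)
x_true = rng.uniform(0.5, 2.0, 16)
y = rng.poisson(A @ x_true).astype(float)  # Poisson-distributed projection data

x = np.ones(16)                            # strictly positive initial image
sens = A.sum(axis=0)                       # sensitivity image, A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)   # measured / estimated projections
    x = x * (A.T @ ratio) / sens           # multiplicative EM update

print(np.round(x[:4], 2))
```

The update preserves positivity by construction and monotonically increases the Poisson log-likelihood, which is the property the scoring alternatives in the paper trade away for faster convergence.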
Wyse, Cathy; Cathcart, Andy; Sutherland, Rona; Ward, Susan; McMillan, Lesley; Gibson, Graham; Padgett, Miles; Skeldon, Kenneth
2005-06-01
Exercise-induced oxidative stress (EIOS) refers to a condition where the balance of free radical production and antioxidant systems is disturbed during exercise in favour of pro-oxidant free radicals. Breath ethane is a product of free radical-mediated oxidation of cell membrane lipids and is considered to be a reliable marker of oxidative stress. The heat-shock protein, haem oxygenase, is induced by oxidative stress and degrades haemoglobin to bilirubin, with concurrent production of carbon monoxide (CO). The aim of this study was to investigate the effect of maximal exercise on exhaled ethane and CO in human, canine, and equine athletes. Human athletes (n = 8) performed a maximal exercise test on a treadmill, and canine (n = 12) and equine (n = 11) athletes exercised at gallop on a sand racetrack. Breath samples were taken at regular intervals during exercise in the human athletes, and immediately before and after exercise in the canine and equine athletes. Breath samples were stored in gas-impermeable bags for analysis of ethane by laser spectroscopy, and CO was measured directly using an electrochemical CO monitor. Maximal exercise was associated with significant increases in exhaled ethane in the human, equine, and canine athletes. Decreased concentrations of exhaled CO were detected after maximal exercise in the human athletes, but CO was rarely detectable in the canine and equine athletes. The ethane breath test allows non-invasive and real-time detection of oxidative stress, and this method will facilitate further investigation of the processes mediating EIOS in human and animal athletes.
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but relies on the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical. The ELASSO, in turn, has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. 
The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines are freely available in a public website. PMID:29200994
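The iterative coordinate descent ingredient of the proposed algorithm can be illustrated in its plain penalized-regression form: cyclic soft-thresholding updates for the Elastic Net objective 0.5||y - Xw||² + λ₁||w||₁ + 0.5·λ₂||w||². Note that the fixed, hand-picked λ₁ and λ₂ below are exactly the heuristic choice the paper's Bayesian hyperparameter learning is designed to avoid; the data are synthetic:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def enet_cd(X, y, lam1, lam2, sweeps=200):
    """Coordinate descent for 0.5||y - Xw||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2."""
    n, p = X.shape
    w = np.zeros(p)
    resid = y.copy()
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(p):
            resid += X[:, j] * w[j]          # remove coordinate j from the fit
            rho = X[:, j] @ resid            # correlation with partial residual
            w[j] = soft(rho, lam1) / (col_sq[j] + lam2)
            resid -= X[:, j] * w[j]          # put the updated coordinate back
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10); w_true[[1, 4]] = [2.0, -3.0]   # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)
w = enet_cd(X, y, lam1=1.0, lam2=0.5)
print(np.round(w, 2))
```

Each coordinate update is a closed-form one-dimensional minimization, which is what makes the coordinate descent loop cheap enough to nest inside an Empirical Bayes outer iteration.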
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg
Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goals of this small animal study were (1) the evaluation of an MRI template and several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms ("linear elastic energy," "membrane energy," and "bending energy"). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity.
Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.
2018-03-01
The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
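The reported near-linear relation presupposes a concrete measure of gradient sparsity. A common choice, assumed here, is the number of nonzero forward-difference entries of the image; the projection-count rule of thumb in the comment is hypothetical and would have to be fitted to one's own scanner and noise level:

```python
import numpy as np

def gradient_sparsity(img, tol=1e-8):
    """Count of nonzero forward-difference gradient entries of a 2D image."""
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return int((np.abs(gx) > tol).sum() + (np.abs(gy) > tol).sum())

# piecewise-constant phantom: a square of 1s on a zero background
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
s = gradient_sparsity(img)
# hypothetical linear rule of thumb in the spirit of the reported result:
# projections_needed ~ c * s, with the constant c calibrated per setup
print(s)
```

For the piecewise-constant phantom above, the count reduces to the perimeter of the square, which is exactly the kind of "few distinct phases, simple structure" regime the authors expect their guideline to cover.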
Ravier, Gilles; Bouzigon, Romain; Beliard, Samuel; Tordi, Nicolas; Grappe, Frederic
2018-04-04
Ravier, G, Bouzigon, R, Beliard, S, Tordi, N, and Grappe, F. Benefits of compression garments worn during handball-specific circuit on short-term fatigue in professional players. J Strength Cond Res XX(X): 000-000, 2016-The purpose of this study was to investigate the benefits of full-leg length compression garments (CGs) worn during handball-specific circuit exercises on athletic performance and acute fatigue-induced changes in strength and muscle soreness in professional handball players. Eighteen men (mean ± SD: age 23.22 ± 4.97 years; body mass: 82.06 ± 9.69 kg; height: 184.61 ± 4.78 cm) completed 2 identical sessions, wearing either regular gym shorts or CGs, in a randomized crossover design. Exercise circuits of explosive activities included 3 periods of 12 minutes of sprints, jumps, and agility drills every 25 seconds. Before, immediately after, and 24 hours postexercise, maximal voluntary knee extension (maximal voluntary contraction, MVC), rate of force development (RFD), and muscle soreness were assessed. During the handball-specific circuit, sprint and jump performances were unchanged in both conditions. Immediately after performing the circuit exercises, MVC, RFD, and pressure pain threshold (PPT) decreased significantly compared with preexercise, with both CGs and noncompression clothes. Decrement was similar in both conditions for RFD (effect size, ES = 0.40) and PPT for the soleus (ES = 0.86). However, wearing CGs attenuated the decrement in MVC (p < 0.001), with a smaller decrease (ES = 1.53) in CGs compared with the regular gym shorts condition (-5.4 vs. -18.7%, respectively). Full recovery was observed 24 hours postexercise in both conditions for muscle soreness, MVC, and RFD. These findings suggest that wearing CGs during a handball-specific circuit limits the impairment of maximal muscle force characteristics and is likely to be worthwhile for handball players involved in activities such as tackles.
Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert
2014-03-01
A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait, rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response, were significantly reduced when the perturbation was expected. While wearing an ankle brace influenced neither muscle preactivation nor the ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in magnitude than braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.
Holtzman, Tahl; Jörntell, Henrik
2011-01-01
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297
Optimal Implementations for Reliable Circadian Clocks
NASA Astrophysics Data System (ADS)
Hasegawa, Yoshihiko; Arita, Masanori
2014-09-01
Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
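Entrainability can be illustrated with a drastically simplified, single-input phase model (not the multi-input model analysed in the paper): an oscillator with a 23.5 h intrinsic period, driven through a hypothetical sinusoidal phase-response curve, locks to a 24 h light-dark cycle. The PRC shape, coupling strength, and light schedule are all illustrative assumptions:

```python
import numpy as np

def simulate_clock(intrinsic_period=23.5, days=40, dt=0.01):
    """Euler integration of dphi/dt = omega + A*sin(phi)*L(t), where L(t)
    is a 12h:12h light-dark cycle (toy single-input phase model)."""
    omega = 2 * np.pi / intrinsic_period
    n = int(days * 24 / dt)
    phi = np.empty(n + 1)
    phi[0] = 0.0
    for k in range(n):
        t = k * dt
        light = 1.0 if (t % 24.0) < 12.0 else 0.0   # lights on for 12 h per day
        phi[k + 1] = phi[k] + (omega + 0.1 * np.sin(phi[k]) * light) * dt
    return phi, dt

phi, dt = simulate_clock()
steps_per_day = int(24 / dt)
# once entrained, the phase advances by exactly one cycle per 24 h day
advance_last_10_days = phi[-1] - phi[-1 - 10 * steps_per_day]
print(round(advance_last_10_days / (2 * np.pi), 3))
```

Despite the 23.5 h intrinsic period, the oscillator completes ten cycles in the final ten days: the light input supplies the small daily phase correction, which is the entrainability half of the regularity/entrainability trade-off the paper analyses.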
Expectations Do Not Alter Early Sensory Processing during Perceptual Decision-Making.
Rungratsameetaweemana, Nuttida; Itthipuripat, Sirawaj; Salazar, Annalisa; Serences, John T
2018-06-13
Two factors play important roles in shaping perception: the allocation of selective attention to behaviorally relevant sensory features, and prior expectations about regularities in the environment. Signal detection theory proposes distinct roles of attention and expectation on decision-making such that attention modulates early sensory processing, whereas expectation influences the selection and execution of motor responses. Challenging this classic framework, recent studies suggest that expectations about sensory regularities enhance the encoding and accumulation of sensory evidence during decision-making. However, it is possible that these findings reflect well-documented attentional modulations in visual cortex. Here, we tested this framework in a group of male and female human participants by examining how expectations about stimulus features (orientation and color) and expectations about motor responses impacted electroencephalography (EEG) markers of early sensory processing and the accumulation of sensory evidence during decision-making (the early visual negative potential and the centro-parietal positive potential, respectively). We first demonstrate that these markers are sensitive to changes in the amount of sensory evidence in the display. Then we show, counter to recent findings, that neither marker is modulated by either feature or motor expectations, despite a robust effect of expectations on behavior. Instead, violating expectations about likely sensory features and motor responses impacts posterior alpha and frontal theta oscillations, signals thought to index overall processing time and cognitive conflict. These findings are inconsistent with recent theoretical accounts and suggest instead that expectations primarily influence decisions by modulating post-perceptual stages of information processing. SIGNIFICANCE STATEMENT Expectations about likely features or motor responses play an important role in shaping behavior. 
Classic theoretical frameworks posit that expectations modulate decision-making by biasing late stages of decision-making including the selection and execution of motor responses. In contrast, recent accounts suggest that expectations also modulate decisions by improving the quality of early sensory processing. However, these effects could instead reflect the influence of selective attention. Here we examine the effect of expectations about sensory features and motor responses on a set of electroencephalography (EEG) markers that index early sensory processing and later post-perceptual processing. Counter to recent empirical results, expectations have little effect on early sensory processing but instead modulate EEG markers of time-on-task and cognitive conflict. Copyright © 2018 the authors 0270-6474/18/385632-17$15.00/0.
Hermassi, Souhail; Chelly, Mohamed Souhaiel; Fieseler, Georg; Bartels, Thomas; Schulze, Stephan; Delank, Karl-Stefan; Shephard, Roy J; Schwesig, René
2017-09-01
Background: Team handball is an intense ball sport with specific requirements on technical skills, tactical understanding, and physical performance. The ability of handball players to develop explosive efforts (e.g. sprinting, jumping, changing direction) is crucial to success. Objective: The purpose of this pilot study was to examine the effects of an in-season high-intensity strength training program on the physical performance of elite handball players. Materials and methods: Twenty-two handball players (a single national-level Tunisian team) were randomly assigned to a control group (CG; n = 10) or a training group (TG; n = 12). At the beginning of the pilot study, all subjects performed a battery of motor tests: one repetition maximum (1-RM) half-squat test, a repeated sprint test [6 × (2 × 15 m) shuttle sprints], squat jumps, counter movement jumps (CMJ), and the Yo-Yo intermittent recovery test level 1. The TG additionally performed a maximal leg strength program twice a week for 10 weeks immediately before engaging in regular handball training. Each strength training session included half-squat exercises to strengthen the lower limbs (80-95% of 1-RM, 1-3 repetitions, 3-6 sets, 3-4 min rest between sets). The control group underwent no additional strength training. The motor test battery was repeated at the end of the study interventions. Results: In the TG, 3 parameters (maximal strength of the lower limbs: η² = 0.74; CMJ: η² = 0.70; and RSA best time: η² = 0.25) showed significant improvements, with large effect sizes (e.g. CMJ: d = 3.77). A reduction in performance for these same 3 parameters was observed in the CG (d = -0.24). Conclusions: The results support our hypothesis that additional strength training twice a week enhances the maximal strength of the lower limbs and jumping or repeated sprinting performance. There was no evidence that shuttle sprints ahead of regular training compromised players' speed and endurance capacities. 
© Georg Thieme Verlag KG Stuttgart · New York.
Simulation study of a high performance brain PET system with dodecahedral geometry.
Tao, Weijie; Chen, Gaoyu; Weng, Fenghua; Zan, Yunlong; Zhao, Zhixiang; Peng, Qiyu; Xu, Jianfeng; Huang, Qiu
2018-05-25
In brain imaging, a spherical PET system achieves the highest sensitivity as far as the solid angle is concerned; however, it is not practical to build. In this work we designed an alternative sphere-like scanner, the dodecahedral scanner, which has a high sensitivity in imaging and a high feasibility of manufacture. We simulated this system and compared its performance with a few other dedicated brain PET systems. Monte Carlo simulations were conducted to generate data for the dedicated brain PET system with the dodecahedral geometry (11 regular pentagon detectors). The data were then reconstructed using in-house developed software with the fully three-dimensional maximum-likelihood expectation maximization (3D-MLEM) algorithm. Results show that the proposed system has a high sensitivity distribution over the whole field of view (FOV). With a depth-of-interaction (DOI) resolution around 6.67 mm, the proposed system achieves a spatial resolution of 1.98 mm. Our simulation study also shows that the proposed system improves image contrast and reduces noise compared with a few other dedicated brain PET systems. Finally, simulations with the Hoffman phantom show the potential of the proposed system in clinical applications. In conclusion, the proposed dodecahedral PET system has potential for widespread application in high-sensitivity, high-resolution PET imaging, allowing the injected dose to be lowered. This article is protected by copyright. All rights reserved.
Hopkins, Melanie J.; Smith, Andrew B.
2015-01-01
How ecological and morphological diversity accrues over geological time has been much debated by paleobiologists. Evidence from the fossil record suggests that many clades reach maximal diversity early in their evolutionary history, followed by a decline in evolutionary rates as ecological space fills or due to internal constraints. Here, we apply recently developed methods for estimating rates of morphological evolution during the post-Paleozoic history of a major invertebrate clade, the Echinoidea. Contrary to expectation, rates of evolution were lowest during the initial phase of diversification following the Permo-Triassic mass extinction and increased over time. Furthermore, although several subclades show high initial rates and net decreases in rates of evolution, consistent with “early bursts” of morphological diversification, at more inclusive taxonomic levels, these bursts appear as episodic peaks. Peak rates coincided with major shifts in ecological morphology, primarily associated with innovations in feeding strategies. Despite having similar numbers of species in today’s oceans, regular echinoids have accrued far less morphological diversity than irregular echinoids due to lower intrinsic rates of morphological evolution and less morphological innovation, the latter indicative of constrained or bounded evolution. These results indicate that rates of evolution are extremely heterogeneous through time and their interpretation depends on the temporal and taxonomic scale of analysis. PMID:25713369
On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center
NASA Technical Reports Server (NTRS)
Sturner, Steven J.
2000-01-01
We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approx. 500 km/s. We have previously shown that a velocity this large could be maintained over the 10^6 year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approx. 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275-1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate that is needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approx. 1.5-2 solar masses, which is in good agreement with both other observations and theoretical production rates.
NASA Astrophysics Data System (ADS)
Newchurch, M.; Zavodsky, B.; Chance, K.; Haynes, J.; Lefer, B. L.; Naeger, A.
2016-12-01
The AQ research community has a long legacy of using space-based observations (e.g., Solar Backscatter Ultraviolet Instrument [SBUV], Global Ozone Monitoring Experiment [GOME], Ozone Monitoring Instrument [OMI], and the Ozone Mapping & Profiler Suite [OMPS]) to study atmospheric chemistry. These measurements have been used to observe day-to-day and year-to-year changes in atmospheric constituents. However, they have not been able to capture the diurnal variability of pollution with enough temporal or spatial fidelity and a low enough latency for regular use by operational decision makers. As a result, the operational AQ community has traditionally relied on ground-based (e.g., collection stations, LIDAR) and airborne observing systems to study tropospheric chemistry. In order to maximize its utility for applications and decision support, there is a need to educate the community about the game-changing potential for the geostationary TEMPO mission well ahead of its expected launch date early in the third decade of this millennium. This NASA mission will engage user communities and enable science across the NASA Applied Science Focus Areas of Health and Air Quality, Disasters, Water Resources, and Ecological Forecasting. In addition, topics discussed will provide opportunities for collaborations extending TEMPO applications to future program areas in Agriculture, Weather and Climate (including Numerical Weather Prediction), Energy, and Oceans.
Do Lower Calorie or Lower Fat Foods Have More Sodium Than Their Regular Counterparts?
John, Katherine A.; Maalouf, Joyce; Barsness, Christina B.; Yuan, Keming; Cogswell, Mary E.; Gunn, Janelle P.
2016-01-01
The objective of this study was to compare the sodium content of a regular food and its lower calorie/fat counterpart. Four food categories, among the top 20 contributing the most sodium to the US diet, met the criteria of having the most matches between regular foods and their lower calorie/fat counterparts. A protocol was used to search websites to create a list of “matches”, a regular and comparable lower calorie/fat food(s) under each brand. Nutrient information was recorded and analyzed for matches. In total, 283 matches were identified across four food categories: savory snacks (N = 44), cheese (N = 105), salad dressings (N = 90), and soups (N = 44). As expected, foods modified from their regular versions had significantly reduced average fat (total fat and saturated fat) and caloric profiles. Mean sodium content among modified salad dressings and cheeses was on average 8%–12% higher, while sodium content did not change with modification of savory snacks. Modified soups had significantly lower mean sodium content than their regular versions (28%–38%). Consumers trying to maintain a healthy diet should consider that sodium content may vary in foods modified to be lower in calories/fat. PMID:27548218
Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin
2013-03-01
Although de novo motifs can be discovered through mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider additional binding features (i.e., position preference and sequence rank preference), information that is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses a purely probabilistic mixture model to describe a motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position, and sequence rank preferences without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrated the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to the more difficult problem of finding co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs with better matches to the known motifs. Finally, we show that the learned position and sequence rank preferences of each coTF reveal potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by ChIP-Seq experiments of the coTFs. The application is available online.
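The EM machinery that SEME builds on can be illustrated on a much simpler mixture problem. The sketch below fits a two-component 1-D Gaussian mixture to synthetic data; it is a stand-in for the motif/background mixture idea, with hypothetical parameters, not SEME's actual sequence model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from the mixture 0.4*N(0,1) + 0.6*N(5,1).
n = 2000
z = rng.random(n) < 0.6
x = np.where(z, rng.normal(5.0, 1.0, n), rng.normal(0.0, 1.0, n))

def em_gmm(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])     # crude but effective init
    var = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per point.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

pi, mu, var = em_gmm(x)
```

The E-step/M-step alternation shown here is the same skeleton SEME applies, with the Gaussian components replaced by motif and background sequence models plus the position and rank preference terms.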
Firing patterns of spontaneously active motor units in spinal cord-injured subjects.
Zijdewind, Inge; Thomas, Christine K
2012-04-01
Involuntary motor unit activity at low rates is common in hand muscles paralysed by spinal cord injury (SCI). Our aim was to describe these patterns of motor unit behaviour in relation to motoneurone and motor unit properties. Intramuscular electromyographic activity (EMG), surface EMG and force were recorded for 30 min from thenar muscles of nine men with chronic cervical SCI. Motor units fired for sustained periods (>10 min) at regular (coefficient of variation [CV] ≤ 0.15; n = 19 units) or irregular intervals (CV > 0.15; n = 14). Regularly firing units started and stopped firing independently, suggesting that intrinsic motoneurone properties were important for recruitment and derecruitment. Recruitment (3.6 Hz, SD 1.2), maximal (10.2 Hz, SD 2.3, range: 7.5-15.4 Hz) and derecruitment frequencies were low (3.3 Hz, SD 1.6), as were firing rate increases after recruitment (~20 intervals in 3 s). Once active, firing often covaried, promoting the idea that units received common inputs. Half of the regularly firing units showed a very slow decline (>40 s) in discharge before derecruitment and had interspike intervals longer than their estimated afterhyperpolarisation (AHP) duration (estimated by death-rate and breakpoint analyses). The other units were derecruited more abruptly and had shorter estimated AHP durations. Overall, regularly firing units had longer estimated AHP durations and were weaker than irregularly firing units, suggesting they were lower threshold units. Sustained firing of units at regular rates may reflect activation of persistent inward currents, visible here in the absence of voluntary drive, whereas irregularly firing units may only respond to synaptic noise.
Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability.
Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
American Airlines Propeller STOL Transport Economic Risk Analysis
NASA Technical Reports Server (NTRS)
Ransone, B.
1972-01-01
A Monte Carlo risk analysis on the economics of STOL transports in air passenger traffic established the probability of making the expected internal rate of financial return, or better, in a hypothetical regular Washington/New York intercity operation.
Engaging Older Adult Volunteers in National Service
ERIC Educational Resources Information Center
McBride, Amanda Moore; Greenfield, Jennifer C.; Morrow-Howell, Nancy; Lee, Yung Soo; McCrary, Stacey
2012-01-01
Volunteer-based programs are increasingly designed as interventions to affect the volunteers and the beneficiaries of the volunteers' activities. To achieve the intended impacts for both, programs need to leverage the volunteers' engagement by meeting their expectations, retaining them, and maximizing their perceptions of benefits. Programmatic…
Kok, Maryse C; Kea, Aschenaki Z; Datiko, Daniel G; Broerse, Jacqueline E W; Dieleman, Marjolein; Taegtmeyer, Miriam; Tulloch, Olivia
2015-09-30
Health extension workers (HEWs) in Ethiopia have a unique position, connecting communities to the health sector. This intermediary position requires strong interpersonal relationships with actors in both the community and health sector, in order to enhance HEW performance. This study aimed to understand how relationships between HEWs, the community and health sector were shaped, in order to inform policy on optimizing HEW performance in providing maternal health services. We conducted a qualitative study in six districts in the Sidama zone, which included focus group discussions (FGDs) with HEWs, women and men from the community and semi-structured interviews with HEWs; key informants working in programme management, health service delivery and supervision of HEWs; mothers; and traditional birth attendants. Respondents were asked about facilitators and barriers regarding HEWs' relationships with the community and health sector. Interviews and FGDs were recorded, transcribed, translated, coded and thematically analysed. HEWs were selected by their communities, which enhanced trust and engagement between them. Relationships were facilitated by programme design elements related to support, referral, supervision, training, monitoring and accountability. Trust, communication and dialogue and expectations influenced the strength of relationships. From the community side, the health development army supported HEWs in liaising with community members. From the health sector side, top-down supervision and inadequate training possibilities hampered relationships and demotivated HEWs. Health professionals, administrators, HEWs and communities occasionally met to monitor HEW and programme performance. Expectations from the community and health sector regarding HEWs' tasks sometimes differed, negatively affecting motivation and satisfaction of HEWs. 
HEWs' relationships with the community and health sector can be constrained as a result of inadequate support systems, lack of trust, communication and dialogue and differing expectations. Clearly defined roles at all levels and standardized support, monitoring and accountability, referral, supervision and training, which are executed regularly with clear communication lines, could improve dialogue and trust between HEWs and actors from the community and health sector. This is important to increase HEW performance and maximize the value of HEWs' unique position.
Fermion-number violation in regularizations that preserve fermion-number symmetry
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Shamir, Yigal
2003-01-01
There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.
NASA Astrophysics Data System (ADS)
Permatasari, T. D.; Thamrin, A.; Hanum, H.
2018-03-01
Patients with chronic kidney disease have a higher risk of psychological distress such as anxiety, depression and cognitive decline. A regular combination of hemodialysis (HD) and hemoperfusion (HP) is better able to eliminate uremic toxins of middle-to-large molecular weight. HD/HP can remove metabolites, toxins, and pathogenic factors and regulate the water, electrolyte and acid-base balance, improving the quality of the patient's sleep and appetite and reducing itching of the skin, which in turn improves quality of life and life expectancy. This was a cross-sectional study with a pre-experimental design conducted from July to September 2015 with 17 regular hemodialysis patients as the sample. Inclusion criteria were regular hemodialysis patients who willingly participated in the research. The assessment was conducted using the Beck Depression Inventory (BDI) to assess depression. Data were analyzed using a t-test, which showed that the average BDI score decreased from 18.59 ± 9 before the combination of HD/HP to 8.18 ± 2.83 after the combination (p < 0.001). In conclusion, combined HD/HP can lower depression scores in patients on regular HD.
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
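The PSO stage of such a pipeline can be sketched as follows. The forward model here is a deliberately simple toy dispersion function (an assumption for illustration, not the paper's ANN surrogate or the Indianapolis data); PSO searches for the source strength and location whose predicted concentrations best match the sensor readings:

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = np.linspace(-3.0, 3.0, 25)      # hypothetical sensor positions

def forward(q, x0):
    # Toy dispersion model: peak concentration q at source location x0.
    return q * np.exp(-(sensors - x0) ** 2)

obs = forward(2.0, 1.0)                   # synthetic observations

def cost(p):
    # Squared mismatch between predicted and observed concentrations.
    return np.sum((forward(p[0], p[1]) - obs) ** 2)

# Standard PSO: inertia 0.7, cognitive/social coefficients 1.5.
n, dim, iters = 30, 2, 300
lo, hi = np.array([0.1, -3.0]), np.array([5.0, 3.0])
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[pbest_cost.argmin()].copy()
```

After the loop, `gbest` holds the estimated (strength, location) pair; in the paper's setting the expensive physics forward model is replaced by the trained ANN, which is what makes repeated cost evaluations affordable.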
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem applied continuously in time under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
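The first step is simple arithmetic. A minimal sketch, assuming the goal is to cap targets at a fixed fraction of the final sequence (the paper's exact rule is not reproduced here, and the function name and default ratio are hypothetical):

```python
import math

def distracters_needed(n_targets, n_images, target_ratio=0.1):
    """Number of distracter images to insert so that targets make up at
    most `target_ratio` of the final RSVP sequence. Illustrative only."""
    required_total = math.ceil(n_targets / target_ratio)
    return max(0, required_total - n_images)

# 20 targets in a 100-image sequence at a 10% cap: grow to 200 images,
# i.e. insert 100 distracters.
extra = distracters_needed(20, 100, target_ratio=0.1)
```

Keeping rare targets at a fixed low fraction of the stream is what preserves the oddball character of the paradigm and hence the P300 response.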
Choosing Fitness-Enhancing Innovations Can Be Detrimental under Fluctuating Environments
Xue, Julian Z.; Costopoulos, Andre; Guichard, Frederic
2011-01-01
The ability to predict the consequences of one's behavior in a particular environment is a mechanism for adaptation. In the absence of any cost to this activity, we might expect agents to choose behaviors that maximize their fitness, an example of directed innovation. This is in contrast to blind mutation, where the probability of becoming a new genotype is independent of the fitness of the new genotypes. Here, we show that under environments punctuated by rapid reversals, a system with both genetic and cultural inheritance should not always maximize fitness through directed innovation. This is because populations highly accurate at selecting the fittest innovations tend to over-fit the environment during its stable phase, to the point that a rapid environmental reversal can cause extinction. A less accurate population, on the other hand, can track long term trends in environmental change, keeping closer to the time-average of the environment. We use both analytical and agent-based models to explore when this mechanism is expected to occur. PMID:22125601
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF), modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Undocumented migration to Venezuela.
Van Roy, R
1984-01-01
"In 1980 Venezuela took...steps to regularize the undocumented migrant population. While the number responding to the amnesty was small relative to expectations, the majority of illegals appeared to have regularized their status. For the first time it was possible to assess objectively the characteristics of the undocumented population. Moreover, the problem of illegal migrants seems to have been temporarily solved, a result of both the amnesty and the country's declining economic activity." Topics covered in the present article include the nationality, geographic distribution, sex and age distribution, educational status, and occupations of undocumented migrants. excerpt
Using return on investment to maximize conservation effectiveness in Argentine grasslands.
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-12-07
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
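The core of the return-on-investment strategy is a greedy benefit-per-dollar ranking. A minimal sketch with hypothetical site data follows; note that it omits the conversion risk, clumping reward, and diminishing returns that the authors build into their benefit function:

```python
def roi_select(sites, budget):
    """Greedy return-on-investment selection: fund sites in order of
    benefit per dollar until the budget runs out.
    sites: iterable of (name, benefit, cost) tuples."""
    ranked = sorted(sites, key=lambda s: s[1] / s[2], reverse=True)
    chosen, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical ecosystem types: B has the best benefit/cost ratio (3.0),
# then A (2.0), then C (1.0).
sites = [("A", 10.0, 5.0), ("B", 6.0, 2.0), ("C", 4.0, 4.0)]
chosen, spent = roi_select(sites, budget=7.0)
```

Ranking by benefit/cost rather than by benefit alone is what distinguishes the return-on-investment strategy from the maximize-benefit strategy the authors compare against.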
Gender differences in the predictors of physical activity among assisted living residents.
Chen, Yuh-Min; Li, Yueh-Ping; Yen, Min-Ling
2015-05-01
To explore gender differences in the predictors of physical activity (PA) among assisted living residents. A cross-sectional design was adopted. A convenience sample of 304 older adults was recruited from four assisted living facilities in Taiwan. Two separate simultaneous multiple regression analyses were conducted to identify the predictors of PA for older men and women. Independent variables entered into the regression models were age, marital status, educational level, past regular exercise participation, number of chronic diseases, functional status, self-rated health, depression, and self-efficacy expectations. In older men, a junior high school or higher educational level, past regular exercise participation, better functional status, better self-rated health, and higher self-efficacy expectations predicted more PA, accounting for 61.3% of the total variance in PA. In older women, better self-rated health, lower depression, and higher self-efficacy expectations predicted more PA, accounting for 50% of the total variance in PA. Predictors of PA differed between the two genders. The results have crucial implications for developing gender-specific PA interventions. Through a clearer understanding of gender-specific predictors, healthcare providers can implement gender-sensitive PA-enhancing interventions to assist older residents in performing sufficient PA. © 2015 Sigma Theta Tau International.
Placebo caffeine reduces withdrawal in abstinent coffee drinkers.
Mills, Llewellyn; Boakes, Robert A; Colagiuri, Ben
2016-04-01
Expectancies have been shown to play a role in the withdrawal syndrome of many drugs of addiction; however, no studies have examined the effects of expectancies across a broad range of caffeine withdrawal symptoms, including craving. The purpose of the current study was to use caffeine as a model to test the effect of expectancy on withdrawal symptoms, specifically whether the belief that one has ingested caffeine is sufficient to reduce caffeine withdrawal symptoms and cravings in abstinent coffee drinkers. We had 24-h abstinent regular coffee drinkers complete the Caffeine Withdrawal Symptom Questionnaire (CWSQ) before and after receiving decaffeinated coffee. One-half of the participants were led to believe the coffee was regular caffeinated coffee (the 'Told Caffeine' condition) and one-half were told that it was decaffeinated (the 'Told Decaf' condition). Participants in the Told Caffeine condition reported a significantly greater reduction in the factors of cravings, fatigue, lack of alertness and flu-like feelings of the CWSQ, than those in the Told Decaf condition. Our results indicated that the belief that one has consumed caffeine can affect caffeine withdrawal symptoms, especially cravings, even when no caffeine was consumed. © The Author(s) 2016.
Effect of core stability training on throwing velocity in female handball players.
Saeterbakken, Atle H; van den Tillaar, Roland; Seiler, Stephen
2011-03-01
The purpose was to study the effect of a sling exercise training (SET)-based core stability program on maximal throwing velocity among female handball players. Twenty-four female high-school handball players (16.6 ± 0.3 years, 63 ± 6 kg, and 169 ± 7 cm) participated and were initially divided into a SET training group (n = 14) and a control group (CON, n = 10). Both groups performed their regular handball training for 6 weeks. In addition, twice a week, the SET group performed a progressive core stability-training program consisting of 6 unstable closed kinetic chain exercises. Maximal throwing velocity was measured before and after the training period using photocells. Maximal throwing velocity significantly increased 4.9% from 17.9 ± 0.5 to 18.8 ± 0.4 m·s⁻¹ in the SET group after the training period (p < 0.01), but was unchanged in the control group (17.1 ± 0.4 vs. 16.9 ± 0.4 m·s⁻¹). These results suggest that core stability training using unstable, closed kinetic chain movements can significantly improve maximal throwing velocity. A stronger and more stable lumbopelvic-hip complex may contribute to higher rotational velocity in multisegmental movements. Strength coaches can incorporate closed kinetic chain exercises that expose the joints to destabilizing forces during training. This may encourage an effective neuromuscular pattern, increase force production, and improve a highly specific performance task such as throwing.
Neuromuscular fatigue following constant versus variable-intensity endurance cycling in triathletes.
Lepers, R; Theurel, J; Hausswirth, C; Bernard, T
2008-07-01
The aim of this study was to determine whether or not variable power cycling produced greater neuromuscular fatigue of knee extensor muscles than constant power cycling at the same mean power output. Eight male triathletes (age: 33 ± 5 years, mass: 74 ± 4 kg, VO2max: 62 ± 5 mL·kg⁻¹·min⁻¹, maximal aerobic power: 392 ± 17 W) performed two 30 min trials on a cycle ergometer in a random order. Cycling exercise was performed either at a constant power output (CP) corresponding to 75% of the maximal aerobic power (MAP) or a variable power output (VP) with alternating ±15%, ±5%, and ±10% of 75% MAP approximately every 5 min. Maximal voluntary contraction (MVC) torque, maximal voluntary activation level and excitation-contraction coupling process of knee extensor muscles were evaluated before and immediately after the exercise using the technique of electrically evoked contractions (single and paired stimulations). Oxygen uptake, ventilation and heart rate were also measured at regular intervals during the exercise. Averaged metabolic variables were not significantly different between the two conditions. Similarly, reductions in MVC torque (approximately -11%, P < 0.05) after cycling were not different (P > 0.05) between CP and VP trials. The magnitude of central and peripheral fatigue was also similar at the end of the two cycling exercises. It is concluded that, following 30 min of endurance cycling, semi-elite triathletes experienced no additional neuromuscular fatigue by varying power (from ±5% to ±15%) compared with a protocol that involved a constant power output.
Shape regularized active contour based on dynamic programming for anatomical structure segmentation
NASA Astrophysics Data System (ADS)
Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra
2005-04-01
We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient-descent fashion, toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
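The DP search with local shape regularization described above can be sketched in a minimal one-dimensional form: the contour is one row index per column, the cost image stands in for the interior/exterior energy, and a `max_step` bound plays the role of the local deformation constraint (the grid layout and bound are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def dp_optimal_contour(cost, max_step=1):
    """Find the row index per column maximizing total cost, where adjacent
    columns may differ by at most `max_step` rows (a crude stand-in for the
    local shape regularization of the search space)."""
    n_rows, n_cols = cost.shape
    best = np.full((n_rows, n_cols), -np.inf)   # best cumulative cost per state
    back = np.zeros((n_rows, n_cols), dtype=int)  # backpointers for backtracking
    best[:, 0] = cost[:, 0]
    for j in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_step), min(n_rows, r + max_step + 1)
            prev = lo + int(np.argmax(best[lo:hi, j - 1]))
            best[r, j] = best[prev, j - 1] + cost[r, j]
            back[r, j] = prev
    # backtrack from the best final-column state
    path = [int(np.argmax(best[:, -1]))]
    for j in range(n_cols - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]
```

Because every admissible path is scored, the search cannot be trapped by a local minimum the way a gradient-descent contour can, which is the property the paper exploits.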
Critical spaces for quasilinear parabolic evolution equations and applications
NASA Astrophysics Data System (ADS)
Prüss, Jan; Simonett, Gieri; Wilke, Mathias
2018-02-01
We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.
On the Teaching of Portfolio Theory.
ERIC Educational Resources Information Center
Biederman, Daniel K.
1992-01-01
Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes , an optimization
Program Monitoring: Problems and Cases.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Creating an Agent Based Framework to Maximize Information Utility
2008-03-01
information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings
Insel, Nathan; Barnes, Carol A.
2015-01-01
The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context and firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state. PMID:24700585
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise generalizes Gaussian noise to the case in which the noise variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
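The robustness mechanism can be illustrated on a scalar analogue (this is a plain EM for the location and scale of t-distributed samples, not the paper's ECME/Levenberg-Marquardt calibration solver): the E-step weight shrinks for outlying residuals, so outliers barely influence the M-step update.

```python
import numpy as np

def t_robust_mean(x, nu=3.0, n_iter=100):
    """EM for the location of Student's t noise: the E-step assigns each
    sample a weight that decays for large residuals, the M-step is a
    weighted mean (location) and weighted variance (scale)."""
    mu, sigma2 = float(np.median(x)), float(np.var(x))
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / sigma2)      # E-step weights
        mu = float(np.sum(w * x) / np.sum(w))               # M-step: location
        sigma2 = float(np.sum(w * (x - mu) ** 2) / x.size)  # M-step: scale
    return mu

# inliers near 5 plus three gross outliers (synthetic illustration)
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(5.0, 1.0, 200), [50.0, 60.0, 70.0]])
```

The ordinary sample mean of `data` is pulled well above 5 by the outliers, while the t-based EM estimate stays close to the inlier location, mirroring how t-noise calibration preserves weak-source flux.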
Can differences in breast cancer utilities explain disparities in breast cancer care?
Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael
2006-12-01
Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. 
Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K
2016-09-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
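For plain binary detection, the expected-utility view described above reduces to a likelihood-ratio test whose threshold is fixed by the prior and the four decision utilities; a minimal sketch under a Gaussian known-signal model (the utility names and the specific model are illustrative assumptions, not the paper's general framework):

```python
import math

def lr_threshold(p_signal, u_tp, u_fn, u_tn, u_fp):
    """Likelihood-ratio threshold that maximizes expected utility for binary
    detection: decide 'signal present' when the LR exceeds this value."""
    prior_odds = (1.0 - p_signal) / p_signal
    return prior_odds * (u_tn - u_fp) / (u_tp - u_fn)

def decide(x, mu=1.0, sigma=1.0, thresh=1.0):
    """Known-signal Gaussian example: LR of N(mu, sigma^2) vs N(0, sigma^2)."""
    lr = math.exp((x * mu - mu ** 2 / 2.0) / sigma ** 2)
    return lr > thresh
```

With equal priors and symmetric utilities the threshold is 1, so the rule reduces to the familiar midpoint test x > mu/2.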
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
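The idea of choosing the marker threshold that maximizes a preference-based expected utility, rather than a purely statistical criterion, can be sketched as a grid search (censoring, which the paper's estimator handles, is ignored here, and the synthetic patient data are an assumption for illustration):

```python
import numpy as np

def optimal_threshold(marker, qaly_treat, qaly_no_treat):
    """Grid-search the marker threshold whose treat-if-above rule maximizes
    the mean quality-adjusted outcome across patients (uncensored sketch)."""
    def expected_utility(t):
        return np.where(marker >= t, qaly_treat, qaly_no_treat).mean()
    utilities = [expected_utility(t) for t in marker]
    return float(marker[int(np.argmax(utilities))])

# synthetic patients: treatment helps only when the marker is >= 0.6
marker = np.linspace(0.0, 1.0, 101)
qaly_treat = np.where(marker >= 0.6, 10.0, 5.0)
qaly_no_treat = np.where(marker >= 0.6, 5.0, 10.0)
```

The grid search recovers 0.6 because any other cut-off treats patients who are harmed or misses patients who benefit, lowering the mean quality-adjusted outcome.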
Adar, Shay; Dor, Roi
2018-02-01
Habitat choice is an important decision that influences animals' fitness. Insect larvae are less mobile than the adults. Consequently, the contribution of the maternal choice of habitat to the survival and development of the offspring is considered to be crucial. According to the "preference-performance hypothesis", ovipositing females are expected to choose habitats that will maximize the performance of their offspring. We tested this hypothesis in wormlions (Diptera: Vermileonidae), which are small sand-dwelling insects that dig pit-traps in sandy patches and ambush small arthropods. Larvae prefer relatively deep and obstacle-free sand, and here we tested the habitat preference of the ovipositing female. Contrary to our expectation, and in contrast to the larval choice, ovipositing females showed no clear preference for either deep sand or an obstacle-free habitat. This suboptimal female choice led to smaller pits being constructed later by the larvae, which may reduce the larvae's prey capture success. We offer several explanations for this apparently suboptimal female behavior, related either to maximizing maternal rather than offspring fitness, or to constraints on the female's behavior. The female's choice of oviposition habitat may have weaker negative consequences than expected for the offspring, as larvae can partially correct suboptimal maternal choice. Copyright © 2017 Elsevier B.V. All rights reserved.
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and makes it possible to incorporate information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
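A sketch of the eigendecomposition idea, under an assumed exponential correlation model (the paper's actual correlation model and operator construction may differ): build the a priori covariance C from cell-centre distances and form the operator W = C^(-1/2), so that the penalty ||Wm|| is large for models inconsistent with the assumed geostatistics.

```python
import numpy as np

def geostat_operator(centers, corr_len=1.0, var=1.0):
    """Geostatistical regularization operator for an irregular mesh:
    eigendecompose the a priori covariance C built from cell-centre
    distances and return W = C^(-1/2). The exponential correlation
    model is an illustrative choice."""
    # pairwise distances between cell centres, shape (n, n)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    C = var * np.exp(-d / corr_len)          # exponential correlation model
    lam, V = np.linalg.eigh(C)               # C is symmetric positive definite
    W = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T
    return W, C

# cell centres of a small irregular 2-D mesh (random for illustration)
centers = np.random.default_rng(0).random((6, 2))
W, C = geostat_operator(centers)
```

Because W is built directly from the covariance, no mesh connectivity is needed, which is what makes the construction convenient on irregular discretizations.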
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to the different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
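The relation "expected losses given failure equals mode probabilities times mode losses", and the simulation-based estimation, can be sketched as follows (the Poisson arrival model and the numeric values are illustrative assumptions, not the paper's models):

```python
import numpy as np

def expected_losses_mc(rate, horizon, p_modes, mode_costs, n_sim=20000, seed=0):
    """Monte Carlo estimate of the expected losses from failures: failures
    arrive as a Poisson process, and each failure takes one of several
    mutually exclusive modes with its own cost (a simplified stand-in for
    the discrete simulation described above)."""
    rng = np.random.default_rng(seed)
    costs = np.asarray(mode_costs, dtype=float)
    n_fail = rng.poisson(rate * horizon, size=n_sim)      # failures per run
    totals = [rng.choice(costs, size=k, p=p_modes).sum() for k in n_fail]
    return float(np.mean(totals))

# analytic value: E[N] * sum(p_k * c_k) = (0.5 * 2) * (0.7*100 + 0.3*1000) = 370
est = expected_losses_mc(0.5, 2.0, [0.7, 0.3], [100.0, 1000.0])
```

The simulation also yields the full distribution of losses per interval, which, as the abstract argues, the average-value risk equation alone cannot reveal.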
Motivational deficit in childhood depression and hyperactivity.
Layne, C; Berry, E
1983-07-01
A recent theory states that the immediate cause of adult depression is low motivation, where motivation is the multiplicative product of a person's expectation of a reward and the value placed on that reward. The present experiment supported the extension of this theory to childhood depression. The expectations, values, and motivations of three groups of children (Ns = 18; mean age = 10 years) were measured: a depressed group and a hyperactive control group were selected from a population of clinically disturbed children, while normal controls were selected from regular classrooms. As predicted, the depressed children exhibited reduced motivation, primarily because their expectations were pessimistic. Unexpected findings were that the depressives' expectations were not abnormally irrational, and that hyperactives exhibited optimistic expectations, inflated values, and, hence, inflated motivation, especially for tangible rewards. Cognitive therapy techniques that focus upon expectations were recommended for the treatment of both depressed and hyperactive children.
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the joint maximization of biogas quality and biogas production with two outputs. The percentages of methane, carbon dioxide, and other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Meurrens, Julie; Steiner, Thomas; Ponette, Jonathan; Janssen, Hans Antonius; Ramaekers, Monique; Wehrlin, Jon Peter; Vandekerckhove, Philippe; Deldicque, Louise
2016-12-01
The aims of the present study were to investigate the impact of three whole blood donations on endurance capacity and hematological parameters and to determine the duration needed to fully recover initial endurance capacity and hematological parameters after each donation. Twenty-four moderately trained subjects were randomly divided into a donation (n = 16) and a placebo (n = 8) group. The three donations were separated by 3 months each, and the recovery of endurance capacity and hematological parameters was monitored up to 1 month after donation. Maximal power output, peak oxygen consumption, and hemoglobin mass decreased (p < 0.001) up to 4 weeks after a single blood donation with a maximal decrease of 4, 10, and 7%, respectively. Hematocrit, hemoglobin concentration, ferritin, and red blood cell count (RBC), all key hematological parameters for oxygen transport, were lowered by a single donation (p < 0.001) and cumulatively further affected by the repetition of the donations (p < 0.001). The maximal decrease after a blood donation was 11% for hematocrit, 10% for hemoglobin concentration, 50% for ferritin, and 12% for RBC (p < 0.001). Maximal power output cumulatively increased in the placebo group as the maximal exercise tests were repeated (p < 0.001), which indicates positive training adaptations. This increase in maximal power output over the whole duration of the study was not observed in the donation group. Maximal, but not submaximal, endurance capacity was altered after blood donation in moderately trained people, and the expected increase in capacity after multiple maximal exercise tests was not present when repeating whole blood donations.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte Carlo study compared three estimators of the parameters of univariate mixtures of two components: modified Newton (NW), the expectation-maximization algorithm (EM), and minimum Cramér-von Mises distance (MD). Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
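As a minimal sketch of the EM estimator compared in this study, the following fits a two-component univariate Gaussian mixture by alternating posterior responsibilities (E-step) and weighted maximum-likelihood updates (M-step). The toy data, initialisation, and iteration count are illustrative assumptions, not the study's settings:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture.
    Returns (weights, means, stds) after n_iter iterations."""
    xs = np.sort(x)
    # crude initialisation: split the sorted sample in half
    mu = np.array([xs[: len(x) // 2].mean(), xs[len(x) // 2 :].mean()])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (w / (sd * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# toy sample of the study's size (n = 160), two well-separated components
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 80), rng.normal(4.0, 1.0, 80)])
w, mu, sd = em_two_gaussians(x)
```

With well-separated components, the estimated means land near the true values of 0 and 4; the study's harder conditions (small mean separation, non-normality) are exactly where this vanilla EM degrades.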
Aging Education: A Worldwide Imperative
ERIC Educational Resources Information Center
McGuire, Sandra L.
2017-01-01
Life expectancy is increasing worldwide. Unfortunately, people are generally not prepared for this long life ahead and have ageist attitudes that inhibit maximizing the "longevity dividend" they have been given. Aging education can prepare people for life's later years and combat ageism. It can reimage aging as a time of continued…
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
The Probabilistic Nature of Preferential Choice
ERIC Educational Resources Information Center
Rieskamp, Jorg
2008-01-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…
Relevance of a Managerial Decision-Model to Educational Administration.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…
India's growing participation in global clinical trials.
Gupta, Yogendra K; Padhy, Biswa M
2011-06-01
Lower operational costs, recent regulatory reforms and several logistic advantages make India an attractive destination for conducting clinical trials. Efforts for maintaining stringent ethical standards and the launch of Pharmacovigilance Program of India are expected to maximize the potential of the country for clinical research. Copyright © 2011. Published by Elsevier Ltd.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
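The Metropolis-Hastings sampler at the core of the stochastic approximation EM can be illustrated with a minimal random-walk version that draws from a density known only up to a constant. The target, step size, and chain length below are illustrative, not those of the latent regression application:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=2.5, seed=0):
    """Random-walk Metropolis-Hastings: samples from a density known
    only up to a normalising constant through log_post(x)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()             # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# standard normal target: chain mean ~ 0 and variance ~ 1
s = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

In the stochastic approximation EM, draws like these replace the intractable integral over the latent trait in the E-step.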
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
2005-04-01
experience. The critical incident interview uses recollection of a specific incident as its starting point and employs a semistructured interview format...context assessment, expectancies, and judgments. The four sweeps in the critical incident interview include: Sweep 1 - Prompting the interviewee to
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
ERIC Educational Resources Information Center
Hess, Frederick M.; McShane, Michael Q.
2013-01-01
There are at least four key places where the Common Core intersects with current efforts to improve education in the United States--testing, professional development, expectations, and accountability. Understanding them can help educators, parents, and policymakers maximize the chance that the Common Core is helpful to these efforts and, perhaps…
Designing Contributing Student Pedagogies to Promote Students' Intrinsic Motivation to Learn
ERIC Educational Resources Information Center
Herman, Geoffrey L.
2012-01-01
In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of…
Charter School Discipline: Examples of Policies and School Climate Efforts from the Field
ERIC Educational Resources Information Center
Kern, Nora; Kim, Suzie
2016-01-01
Students need a safe and supportive school environment to maximize their academic and social-emotional learning potential. A school's discipline policies and practices directly impact school climate and student achievement. Together, discipline policies and positive school climate efforts can reinforce behavioral expectations and ensure student…
Llewellyn-Thomas, H; Thiel, E; Paterson, M; Naylor, D
1999-04-01
To elicit patients' maximal acceptable waiting times (MAWT) for non-urgent coronary artery bypass grafting (CABG), and to determine if MAWT is related to prior expectations of waiting times, symptom burden, expected relief, or perceived risks of myocardial infarction while waiting. Seventy-two patients on an elective CABG waiting list chose between two hypothetical but plausible options: a 1-month wait with 2% risk of surgical mortality, and a 6-month wait with 1% risk of surgical mortality. Waiting time in the 6-month option was varied up if respondents chose the 6-month/lower risk option, and down if they chose the 1-month/higher risk option, until the MAWT switch point was reached. Patients also reported their expected waiting time, perceived risks of myocardial infarction while waiting, current function, expected functional improvement and the value of that improvement. Only 17 (24%) patients chose the 6-month/1% risk option, while 55 (76%) chose the 1-month/2% risk option. The median MAWT was 2 months; scores ranged from 1 to 12 months (with two outliers). Many perceived high cumulative risks of myocardial infarction if waiting for 1 (upper quartile, > or = 1.45%) or 6 (upper quartile, > or = 10%) months. However, MAWT scores were related only to expected waiting time (r = 0.47; P < 0.0001). Most patients reject waiting 6 months for elective CABG, even if offered along with a halving in surgical mortality (from 2% to 1%). Intolerance for further delay seems to be determined primarily by patients' attachment to their scheduled surgical dates. Many also have severely inflated perceptions of their risk of myocardial infarction in the queue. These results suggest a need for interventions to modify patients' inaccurate risk perceptions, particularly if a scheduled surgical date must be deferred.
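The up-and-down variation of waiting time until the switch point is, in essence, a bisection search over months. A minimal sketch; the respondent model and month bounds are illustrative assumptions, not the study's protocol:

```python
def titrate_mawt(prefers_long_wait, lo=1, hi=12):
    """Bisect over months until the respondent's switch point is found.
    prefers_long_wait(m) -> True if an m-month wait with the lower (1%)
    surgical risk is still preferred to the 1-month / 2% option."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prefers_long_wait(mid):
            lo = mid        # still accepted: try an even longer wait
        else:
            hi = mid        # refused: shorten the wait
    return lo               # longest wait still accepted = MAWT

# hypothetical respondent whose true switch point is 2 months
mawt = titrate_mawt(lambda m: m <= 2)
```

For this hypothetical respondent the procedure returns the study's median MAWT of 2 months.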
Applications of compressed sensing image reconstruction to sparse view phase tomography
NASA Astrophysics Data System (ADS)
Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian
2017-10-01
X-ray phase CT has the potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and a loss of smooth intensity changes due to the piecewise-constant nature of the image model. The second CS method is a relatively new approach that uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact of TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, past research has not clearly demonstrated how much the image quality differs between TV regularization and nonlinear filter based CS in x-ray phase CT applications. We clarify this issue by applying the two CS approaches to x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms TV regularization in terms of textures and smooth intensity changes.
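The TV idea can be sketched in one dimension: gradient descent on a least-squares data term plus a smoothed total-variation penalty, which flattens noise while preserving jumps. The smoothing constant, step size, and test signal are illustrative choices, not the paper's reconstruction algorithm:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, n_iter=1000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed 1-D total-variation objective."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)      # smoothed sign of each jump
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                 # each jump pulls its two
        tv_grad[1:] += g                  # endpoints toward each other
        x = x - step * ((x - y) + lam * tv_grad)
    return x

# noisy piecewise-constant signal: TV recovers the flat plateaus
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.normal(size=100)
x = tv_denoise_1d(y)
```

The piecewise-constant bias visible here, flat plateaus even where the true image varies smoothly, is exactly the drawback the paper's nonlinear-filter-based CS aims to reduce.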
Impact of chronobiology on neuropathic pain treatment.
Gilron, Ian
2016-01-01
Inflammatory pain exhibits circadian rhythmicity. Recently, a distinct diurnal pattern has been described for peripheral neuropathic conditions. This diurnal variation has several implications: advancing understanding of chronobiology may facilitate identification of new and improved treatments; developing pain-contingent strategies that maximize treatment at times of the day associated with highest pain intensity may provide optimal pain relief as well as minimize treatment-related adverse effects (e.g., daytime cognitive dysfunction); and consideration of the impact of chronobiology on pain measurement may lead to improvements in analgesic study design that will maximize assay sensitivity of clinical trials. Recent and ongoing chronobiology studies are thus expected to advance knowledge and treatment of neuropathic pain.
NASA Technical Reports Server (NTRS)
Conrad, G. W.; Conrad, A. H.; Spooner, B. S. (Principal Investigator)
1992-01-01
Application of reference standard reagents to alternatively depolymerize or stabilize microtubules in a cell that undergoes very regular cytoskeleton-dependent shape changes provides a model system in which some expected components of the environments of spacecraft and space can be tested on Earth for their effects on the cytoskeleton. The fertilized eggs of Ilyanassa obsoleta undergo polar lobe formation by repeated, dramatic, constriction and relaxation of a microfilamentous band localized in the cortical cytoplasm and activated by microtubules.
The nature of the evolution of galaxies by mergers
NASA Technical Reports Server (NTRS)
Chatterjee, Tapan K.
1993-01-01
The merger theory for the formation of elliptical galaxies is examined by conducting a dynamical study of the expected frequency of merging galaxies on the basis of the collisional theory, using galaxy models without halos. The expected merger rates obtained on the basis of the collisional theory fall about an order of magnitude below the observational value in the present epoch. In the light of current observational evidence and the results obtained, a marked regularity in the formation of ellipticals is indicated, followed by secular evolution by mergers.
Simulating a Skilled Typist: A Study of Skilled Cognitive-Motor Performance.
1981-05-01
points out, such behavior is to be expected from a metronome model of typing in which the typist initiates a stroke regularly to some sort of...long. As we show, this behavior is also to be expected from models not involving such an internal clock. All other things being equal, the model... behavior actually engaged in by expert typists. The Units of Typing Seem to Be Largely at the Word Level or Smaller. The units of typing in our model are
Speed- and Circuit-Based High-Intensity Interval Training on Recovery Oxygen Consumption
SCHLEPPENBACH, LINDSAY N.; EZER, ANDREAS B.; GRONEMUS, SARAH A.; WIDENSKI, KATELYN R.; BRAUN, SAORI I.; JANOT, JEFFREY M.
2017-01-01
Due to the current obesity epidemic in the United States, there is growing interest in efficient, effective ways to increase energy expenditure and weight loss. Research has shown that high-intensity exercise elicits a higher Excess Post-Exercise Oxygen Consumption (EPOC) throughout the day compared to steady-state exercise. Currently, there is no single research study that examines the differences in Recovery Oxygen Consumption (ROC) resulting from high-intensity interval training (HIIT) modalities. The purpose of this study is to review the impact of circuit training (CT) and speed interval training (SIT), on ROC in both regular exercising and sedentary populations. A total of 26 participants were recruited from the UW-Eau Claire campus and divided into regularly exercising and sedentary groups, according to self-reported exercise participation status. Oxygen consumption was measured during and after two HIIT sessions and was used to estimate caloric expenditure. There was no significant difference in caloric expenditure during and after exercise among individuals who regularly exercise and individuals who are sedentary. There was also no significant difference in ROC between regular exercisers and sedentary or between SIT and CT. However, there was a significantly higher caloric expenditure in SIT vs. CT regardless of exercise status. It is recommended that individuals engage in SIT vs. CT when the goal is to maximize overall caloric expenditure. With respect to ROC, individuals can choose either modality of HIIT to achieve similar effects on increased oxygen consumption post-exercise. PMID:29170696
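Estimating caloric expenditure from measured oxygen consumption, as this study does, commonly uses the approximation of roughly 5 kcal per litre of O2 (the exact figure varies with the respiratory exchange ratio). The VO2 values and durations below are hypothetical, not the study's data:

```python
def kcal_from_vo2(vo2_litres_per_min, minutes, kcal_per_litre=5.0):
    """Rough caloric expenditure from measured oxygen consumption.
    ~5 kcal per litre of O2 is a common approximation; the true value
    varies with the respiratory exchange ratio."""
    return vo2_litres_per_min * minutes * kcal_per_litre

# hypothetical 20-min HIIT bout at an average VO2 of 2.4 L/min,
# followed by an hour of recovery VO2 elevated by 0.35 L/min (EPOC)
session_kcal = kcal_from_vo2(2.4, 20)
recovery_kcal = kcal_from_vo2(0.35, 60)
```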
NASA Technical Reports Server (NTRS)
DeYoung, J. A.; McKinley, A.; Davis, J. A.; Hetzel, P.; Bauch, A.
1996-01-01
Eight laboratories are participating in an international two-way satellite time and frequency transfer (TWSTFT) experiment. Regular time and frequency transfers have been performed over a period of almost two years, including both European and transatlantic time transfers. The performance of the regular TWSTFT sessions over an extended period has demonstrated conclusively the usefulness of the TWSTFT method for routine international time and frequency comparisons. Regular measurements are performed three times per week, resulting in a regular but unevenly spaced data set. A method is presented that allows estimates of the values of σy(τ) to be formed from these data. In order to maximize efficient use of paid satellite time, an investigation to determine the optimal length of a single TWSTFT session is presented. The optimal session length is determined by evaluating how long white phase modulation (PM) instabilities remain the dominant noise source during the typical 300-second sampling times currently used. A detailed investigation of the frequency transfers realized via the transatlantic TWSTFT links UTC(USNO)-UTC(NPL), UTC(USNO)-UTC(PTB), and UTC(PTB)-UTC(NPL) is presented. The investigation focuses on the frequency instabilities realized, a three-cornered-hat resolution of the σy(τ) values, and a comparison of the transatlantic and inter-European determinations of UTC(PTB)-UTC(NPL). Future directions of this TWSTFT experiment are outlined.
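The frequency-stability measure σy(τ) discussed above can be sketched for the simple case of evenly spaced fractional-frequency data; the paper's three-per-week data are unevenly spaced, which requires the more careful method it presents:

```python
import numpy as np

def allan_deviation(y, m=1):
    """Overlapping Allan deviation of fractional-frequency data y at
    averaging factor m; assumes evenly spaced samples."""
    y = np.asarray(y, dtype=float)
    avg = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
    d = avg[m:] - avg[:-m]          # differences of adjacent averages
    return np.sqrt(0.5 * np.mean(d * d))

# white frequency noise averages down roughly as 1/sqrt(m)
rng = np.random.default_rng(0)
noise = rng.normal(size=4096)
adev1, adev16 = allan_deviation(noise, 1), allan_deviation(noise, 16)
```

The decrease of the deviation with averaging factor is what makes longer sessions attractive, until white phase modulation stops being the dominant noise source.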
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Besides, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
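Discarding the strongly decaying spherical waves before inverting amounts to a truncated-SVD least-squares solution: only the efficiently radiating modes are inverted, so weak modes cannot drive the transducers to overload. A minimal sketch, with a small illustrative test matrix standing in for the array's radiation operator:

```python
import numpy as np

def truncated_lsq(A, b, keep):
    """Least-squares solution of A x = b with the weakest singular
    values discarded: only the `keep` largest modes are inverted."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]     # invert only the efficient modes
    return Vt.T @ (s_inv * (U.T @ b))

# toy 2x2 system: one strong mode, one weak mode
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x_full = truncated_lsq(A, b, keep=2)   # ordinary least squares
x_reg = truncated_lsq(A, b, keep=1)    # weak mode discarded
```

Keeping both modes reproduces the exact solution; truncating the weak mode caps the driving signals at the cost of a larger residual, which is the trade-off the paper's regularization quality factor measures.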
Firing patterns of spontaneously active motor units in spinal cord-injured subjects
Zijdewind, Inge; Thomas, Christine K
2012-01-01
Involuntary motor unit activity at low rates is common in hand muscles paralysed by spinal cord injury. Our aim was to describe these patterns of motor unit behaviour in relation to motoneurone and motor unit properties. Intramuscular electromyographic activity (EMG), surface EMG and force were recorded for 30 min from thenar muscles of nine men with chronic cervical SCI. Motor units fired for sustained periods (>10 min) at regular (coefficient of variation ≤ 0.15, CV, n = 19 units) or irregular intervals (CV > 0.15, n = 14). Regularly firing units started and stopped firing independently suggesting that intrinsic motoneurone properties were important for recruitment and derecruitment. Recruitment (3.6 Hz, SD 1.2), maximal (10.2 Hz, SD 2.3, range: 7.5–15.4 Hz) and derecruitment frequencies were low (3.3 Hz, SD 1.6), as were firing rate increases after recruitment (∼20 intervals in 3 s). Once active, firing often covaried, promoting the idea that units received common inputs. Half of the regularly firing units showed a very slow decline (>40 s) in discharge before derecruitment and had interspike intervals longer than their estimated afterhyperpolarisation potential (AHP) duration (estimated by death rate and breakpoint analyses). The other units were derecruited more abruptly and had shorter estimated AHP durations. Overall, regularly firing units had longer estimated AHP durations and were weaker than irregularly firing units, suggesting they were lower threshold units. Sustained firing of units at regular rates may reflect activation of persistent inward currents, visible here in the absence of voluntary drive, whereas irregularly firing units may only respond to synaptic noise. PMID:22310313
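The study's regular/irregular split by the coefficient of variation of interspike intervals can be sketched directly; the spike trains below are synthetic examples, not recorded data:

```python
import numpy as np

def classify_firing(spike_times_s, cv_threshold=0.15):
    """Label a motor unit 'regular' or 'irregular' from the coefficient
    of variation (CV) of its interspike intervals, using the CV <= 0.15
    criterion quoted in the study."""
    isi = np.diff(spike_times_s)
    cv = isi.std() / isi.mean()
    return ("regular" if cv <= cv_threshold else "irregular"), cv

# synthetic example: a perfectly periodic 8 Hz train
label, cv = classify_firing(np.arange(0.0, 2.0, 0.125))
```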
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
Cerebellum, temporal predictability and the updating of a mental model.
Kotz, Sonja A; Stockert, Anika; Schwartze, Michael
2014-12-19
We live in a dynamic and changing environment, which necessitates that we adapt to and efficiently respond to changes of stimulus form ('what') and stimulus occurrence ('when'). Consequently, behaviour is optimal when we can anticipate both the 'what' and 'when' dimensions of a stimulus. For example, to perceive a temporally expected stimulus, a listener needs to establish a fairly precise internal representation of its external temporal structure, a function ascribed to classical sensorimotor areas such as the cerebellum. Here we investigated how patients with cerebellar lesions and healthy matched controls exploit temporal regularity during auditory deviance processing. We expected modulations of the N2b and P3b components of the event-related potential in response to deviant tones, and also a stronger P3b response when deviant tones are embedded in temporally regular compared to irregular tone sequences. We further tested to what degree structural damage to the cerebellar temporal processing system affects the N2b and P3b responses associated with voluntary attention to change detection and the predictive adaptation of a mental model of the environment, respectively. Results revealed that healthy controls and cerebellar patients display an increased N2b response to deviant tones independent of temporal context. However, while healthy controls showed the expected enhanced P3b response to deviant tones in temporally regular sequences, the P3b response in cerebellar patients was significantly smaller in these sequences. The current data provide evidence that structural damage to the cerebellum affects the predictive adaptation to the temporal structure of events and the updating of a mental model of the environment under voluntary attention. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François
2013-01-01
Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.
Using return on investment to maximize conservation effectiveness in Argentine grasslands
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-01-01
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land (“minimize cost”), maximizing conservation benefit regardless of cost (“maximize benefit”), and maximizing conservation benefit per dollar (“return on investment”). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy. PMID:21098281
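The return-on-investment strategy compared above ranks candidate actions by conservation benefit per dollar and funds them greedily within the budget. A minimal sketch with hypothetical parcel names and numbers (assumes positive benefits; real benefit scores would also encode conversion risk, clumping, and diminishing returns):

```python
def greedy_roi(parcels, budget):
    """Return-on-investment selection: fund parcels in order of benefit
    per dollar (assumes positive benefits) until the budget runs out.
    parcels: iterable of (name, benefit, cost) tuples."""
    chosen, spent = [], 0.0
    # sort by cost per unit benefit, i.e. descending benefit per dollar
    for name, benefit, cost in sorted(parcels, key=lambda p: p[2] / p[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# hypothetical grassland parcels (benefit units and costs are made up)
parcels = [("pampas", 10.0, 2.0), ("espinal", 6.0, 3.0), ("puna", 2.0, 4.0)]
chosen, spent = greedy_roi(parcels, budget=5.0)
```

Under a tight budget this ranking can beat both the minimize-cost and maximize-benefit strategies, which is the paper's central empirical point.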
Optimization of detectors for the ILC
NASA Astrophysics Data System (ADS)
Suehara, Taikan; ILD Group; SID Group
2016-04-01
The International Linear Collider (ILC) is a next-generation e+e- linear collider to explore Higgs, Beyond-Standard-Model, top and electroweak particles with great precision. We are optimizing our two detectors, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. The optimization study on vertex detectors, main trackers and calorimeters is underway. We aim to conclude the optimization and establish final designs in a few years, finishing the detector TDR and proposal in reply to the expected "green sign" of the ILC project.
How will better products improve the sensory-liking and willingness to buy insect-based foods?
Tan, Hui Shan Grace; Verbaan, Yoeri Timothy; Stieger, Markus
2017-02-01
Insects have been established as a more sustainable alternative protein source than conventional meats, but they have little appeal to those who are unfamiliar with their taste. Yet little attention has been given to understanding how more appealing products could be developed, and whether that is sufficient to encourage consumption of a culturally unusual food. By evaluating appropriate (i.e., meatball) and inappropriate (i.e., dairy drink) mealworm products alongside the original mealworm-free products, this study provides new insights into how the product influences sensory-liking and willingness to buy insect-based foods for trial and regular consumption. Willing (n=135) and unwilling tasters (n=79) were recruited to explore differences between individuals who differ in their intentions to eat insects. An appropriate product context improved the expected sensory-liking and willingness to buy mealworm products both once and regularly. However, consumers must first be motivated to eat insects for a better product to improve consumption intentions. Descriptive sensory profiling revealed that mealworm products were expected, and experienced, to taste very different from the original mealworm-free products, although consumers generally preferred them to taste similar to the original, albeit with some unique attributes. Using a familiar and liked product preparation could help to increase trial intentions, but the product should also be appropriate and taste good if it is to be consumed regularly. We conclude that even with high interest and good products, willing consumers still hesitate to consume insect-based foods regularly due to other practical and socio-cultural factors.
We recommend that future research should not only give emphasis to increasing initial motivations to try, but should address the barriers to buying and preparing insects for regular consumption, where issues relating to availability, pricing, knowledge and the social environment inhibit the uptake of this culturally new food. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bouwer, Fleur L; Werner, Carola M; Knetemann, Myrthe; Honing, Henkjan
2016-05-01
Beat perception is the ability to perceive temporal regularity in musical rhythm. When a beat is perceived, predictions about upcoming events can be generated. These predictions can influence processing of subsequent rhythmic events. However, statistical learning of the order of sounds in a sequence can also affect processing of rhythmic events and must be differentiated from beat perception. In the current study, using EEG, we examined the effects of attention and musical abilities on beat perception. To ensure we measured beat perception and not absolute perception of temporal intervals, we used alternating loud and soft tones to create a rhythm with two hierarchical metrical levels. To control for sequential learning of the order of the different sounds, we used temporally regular (isochronous) and jittered rhythmic sequences. The order of sounds was identical in both conditions, but only the regular condition allowed for the perception of a beat. Unexpected intensity decrements were introduced on the beat and offbeat. In the regular condition, both beat perception and sequential learning were expected to enhance detection of these deviants on the beat. In the jittered condition, only sequential learning was expected to affect processing of the deviants. ERP responses to deviants were larger on the beat than offbeat in both conditions. Importantly, this difference was larger in the regular condition than in the jittered condition, suggesting that beat perception influenced responses to rhythmic events in addition to sequential learning. The influence of beat perception was present both with and without attention directed at the rhythm. Moreover, beat perception as measured with ERPs correlated with musical abilities, but only when attention was directed at the stimuli. Our study shows that beat perception is possible when attention is not directed at a rhythm. 
In addition, our results suggest that attention may mediate the influence of musical abilities on beat perception. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Workplace characteristics and work-to-family conflict: does caregiving frequency matter?
Brown, Melissa; Pitt-Catsouphes, Marcie
2013-01-01
Many workers can expect to provide care to an elder relative at some point during their tenure in the workforce. This study extends previous research by exploring whether caregiving frequency (providing care on a regular, weekly basis vs. intermittently) moderates the relationship between certain workplace characteristics and work-to-family conflict. Utilizing a sample of 465 respondents from the National Study of the Changing Workforce (Families and Work Institute, 2008), results indicate that access to workplace flexibility has a stronger effect on reducing work-to-family conflict among intermittent caregivers than among those who provide care regularly.
Real-world spatial regularities affect visual working memory for objects.
Kaiser, Daniel; Stein, Timo; Peelen, Marius V
2015-12-01
Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy that utilizes smoothness of the signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches: image-space total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter-mapping accuracy without explicit use of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
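As a generic illustration of sparsity-promoting regularization within a compressed sensing framework (not the authors' p-CS method, which exploits smoothness in the parametric dimension), a minimal iterative soft-thresholding (ISTA) sketch for an underdetermined linear problem might look like the following; the matrix and sparse signal are synthetic:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data-fit term, then l1 shrinkage.
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))     # underdetermined: 30 measurements, 60 unknowns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]  # sparse ground truth
y = A @ x_true                          # noiseless measurements
x_hat = ista(A, y, lam=0.05)
```

Despite having half as many measurements as unknowns, the l1 penalty recovers the sparse support, which is the core mechanism compressed sensing approaches rely on.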
Lancashire, E R; Frobisher, C; Reulen, R C; Winter, D L; Glaser, A; Hawkins, M M
2010-02-24
Previous studies of educational attainment among childhood cancer survivors were small, had contradictory findings, and were not population based. This study investigated educational attainment in a large population-based cohort of survivors of all types of childhood cancer in Great Britain. Four levels of educational attainment among 10,183 cancer survivors--degree, teaching qualification, advanced (A') levels, and ordinary (O') levels--were compared with expected levels in the general population. A questionnaire was used to obtain educational attainment data for survivors, and comparable information for the general population was available from the General Household Survey. Factors associated with level of educational attainment achieved by cancer survivors were identified using multivariable logistic regression together with likelihood ratio tests. Logistic regression adjusting for age and sex was used for comparisons with the general population. All statistical tests were two-sided. Childhood cancer survivors had lower educational attainment than the general population (degree: odds ratio [OR] = 0.77, 99% confidence interval [CI] = 0.68 to 0.87; teaching qualification: OR = 0.85, 99% CI = 0.77 to 0.94; A'level: OR = 0.85, 99% CI = 0.78 to 0.93; O'level: OR = 0.81, 99% CI = 0.74 to 0.90; P < .001, all levels). Statistically significant deficits were restricted to central nervous system (CNS) neoplasm and leukemia survivors. For leukemia, only those treated with radiotherapy were considered. Odds ratios for achievement by irradiated CNS tumor survivors were 50%-74% of those for cranially irradiated leukemia or nonirradiated CNS tumor survivors. Survivors at greater risk of poorer educational outcomes included those treated with cranial irradiation, diagnosed with a CNS tumor, older at questionnaire completion, younger at diagnosis, diagnosed with epilepsy, and who were female. 
Specific groups of childhood cancer survivors achieve lower-than-expected educational attainment. Detailed educational support and implementation of regular cognitive assessment may be indicated for some groups to maximize long-term function.
When to initiate integrative neuromuscular training to reduce sports-related injuries in youth?
Myer, Gregory D.; Faigenbaum, Avery D.; Ford, Kevin R.; Best, Thomas M.; Bergeron, Michael F.; Hewett, Timothy E.
2011-01-01
Regular participation in organized youth sports does not ensure adequate exposure to skill- and health-related fitness activities, and sport training without preparatory conditioning does not appear to reduce the risk of injury in young athletes. Recent trends indicate that widespread participation in organized youth sports is occurring at a younger age, especially in girls. Current public health recommendations developed to promote muscle-strengthening and bone-building activities for youth aged 6 and older, along with increased involvement in competitive sport activities at younger ages, have increased interest and concern from parents, clinicians, coaches, and teachers regarding the optimal age at which to encourage and integrate more specialized physical training into youth development programs. This review synthesizes the latest literature and expert opinion regarding when to initiate neuromuscular conditioning in youth and presents a "how-to" conceptual model of integrative training that could maximize the potential health-related benefits for children by reducing sports-related injury risk and encouraging lifelong regular physical activity. PMID:21623307
Forecasting long-range atmospheric transport episodes of polychlorinated biphenyls using FLEXPART
NASA Astrophysics Data System (ADS)
Halse, Anne Karine; Eckhardt, Sabine; Schlabach, Martin; Stohl, Andreas; Breivik, Knut
2013-06-01
The analysis of concentrations of persistent organic pollutants (POPs) in ambient air is costly and can only be done for a limited number of samples. It is thus beneficial to maximize the information content of the samples analyzed via a targeted observation strategy. Using polychlorinated biphenyls (PCBs) as an example, a forecasting system to predict and evaluate long-range atmospheric transport (LRAT) episodes of POPs at a remote site in southern Norway has been developed. The system uses the Lagrangian particle transport model FLEXPART, and can be used for triggering extra ("targeted") sampling when LRAT episodes are predicted to occur. The system was evaluated by comparing targeted samples collected over 12-25 h during individual LRAT episodes with monitoring samples regularly collected over one day per week throughout a year. Measured concentrations in all targeted samples were above the 75th percentile of the concentrations obtained from the regular monitoring program and included the highest measured values of all samples. This clearly demonstrates the success of the targeted sampling strategy.
Regular and Chaotic Quantum Dynamics of Two-Level Atoms in a Selfconsistent Radiation Field
NASA Technical Reports Server (NTRS)
Konkov, L. E.; Prants, S. V.
1996-01-01
Dynamics of two-level atoms interacting with their own radiation field in a single-mode high-quality resonator is considered. The dynamical system consists of two second-order differential equations, one for the atomic SU(2) dynamical-group parameter and another for the field strength. With the help of the maximal Lyapunov exponent for this set, we numerically investigate transitions from regularity to deterministic quantum chaos in this simple model. Increasing the collective coupling constant b ≡ 8πN₀d²/(ℏω), we observed for initially unexcited atoms the usual sharp transition to chaos at b_c ≈ 1. If we instead take the dimensionless individual Rabi frequency a ≡ Ω/(2ω) as the control parameter, a sequence of order-to-chaos transitions is observed, starting at the critical value a_c ≈ 0.25 for the same initial conditions.
Lee, Sung Soo; Kang, Sunghwun
2015-01-01
[Purpose] The aim of the study was to clarify the effects of regular exercise on lipid profiles and serum adipokines in Korean children. [Subjects and Methods] Subjects were divided into controls (n=10), children who were obese (n=10), and children with type 2 diabetes mellitus (n=10). Maximal oxygen uptake (VO2max), body composition, lipid profiles, glucagon, insulin and adipokines (leptin, resistin, visfatin and retinol binding protein 4) were measured before and after a 12-week exercise program. [Results] Body weight, body mass index, and percentage body fat were significantly higher in the obese and diabetes groups compared with the control group. Total cholesterol, triglycerides, low-density lipoprotein cholesterol and glycemic control levels were significantly decreased after the exercise program in the obese and diabetes groups, while high-density lipoprotein cholesterol levels were significantly increased. Adipokines were higher in the obese and diabetes groups compared with the control group prior to the exercise program, and were significantly lower following completion. [Conclusion] These results suggest that regular exercise has positive effects on obesity and type 2 diabetes mellitus in Korean children by improving glycemic control and reducing body weight, thereby lowering cardiovascular risk factors and adipokine levels. PMID:26180345
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. 
Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager chooses between a subsystem with 0.990 reliability and a more costly subsystem with 0.995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved; the subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs should be minimized.
The critical 1-arm exponent for the ferromagnetic Ising model on the Bethe lattice
NASA Astrophysics Data System (ADS)
Heydenreich, Markus; Kolesnikov, Leonid
2018-04-01
We consider the ferromagnetic nearest-neighbor Ising model on regular trees (Bethe lattice), which is well known to undergo a phase transition in the absence of an external magnetic field. The behavior of the model at the critical temperature can be described in terms of various critical exponents; one of them is the critical 1-arm exponent ρ, which characterizes the rate of decay of the (root) magnetization as a function of the distance to the boundary. The crucial quantity we analyze in this work is the thermal expectation of the root spin on a finite subtree, where the expected value is taken with respect to a probability measure related to the corresponding finite-volume Hamiltonian with a fixed boundary condition. The spontaneous magnetization, which is the limit of this thermal expectation as the distance between the root and the boundary (i.e., the height of the subtree) tends to infinity, is known to vanish at criticality. We are interested in a quantitative analysis of the rate of this convergence in terms of the critical 1-arm exponent ρ. We rigorously prove that ⟨σ0⟩_n^+, the thermal expectation of the root spin at the critical temperature and in the presence of the positive boundary condition, decays as ⟨σ0⟩_n^+ ≈ n^(-1/2) (in a rather sharp sense), where n is the height of the tree. This establishes the 1-arm critical exponent ρ = 1/2 for the Ising model on regular trees.
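For clarity, the main quantitative statement can be set in display form (a restatement of the decay result, with the "rather sharp sense" read here as two-sided bounds):

```latex
% At the critical temperature, with plus boundary condition at distance n
% from the root of the regular tree:
\[
  \langle \sigma_0 \rangle_n^{+} \asymp n^{-\rho},
  \qquad \rho = \tfrac{1}{2},
\]
% i.e. there exist constants $0 < c \le C$ such that
% $c\, n^{-1/2} \le \langle \sigma_0 \rangle_n^{+} \le C\, n^{-1/2}$
% for all heights $n$.
```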
Heterogeneous responses of human limbs to infused adrenergic agonists: a gravitational effect?
NASA Technical Reports Server (NTRS)
Pawelczyk, James A.; Levine, Benjamin D.
2002-01-01
Unlike quadrupeds, the legs of humans are regularly exposed to elevated pressures relative to the arms. We hypothesized that this "dependent hypertension" would be associated with altered adrenergic responsiveness. Isoproterenol (0.75-24 ng x 100 ml limb volume-1 x min-1) and phenylephrine (0.025-0.8 microg x 100 ml limb volume-1 x min-1) were infused incrementally in the brachial and femoral arteries of 12 normal volunteers; changes in limb blood flow were quantified by using strain-gauge plethysmography. Compared with the forearm, baseline calf vascular resistance was greater (38.8 +/- 2.5 vs. 26.9 +/- 2.0 mmHg x 100 ml x min x ml-1; P < 0.001) and maximal conductance was lower (46.1 +/- 11.9 vs. 59.4 +/- 13.4 ml x ml-1 x min-1 x mmHg-1; P < 0.03). Vascular conductance did not differ between the two limbs during isoproterenol infusions, whereas decreases in vascular conductance were greater in the calf than the forearm during phenylephrine infusions (P < 0.001). With responses normalized to maximal conductance, the half-maximal response for phenylephrine was significantly less for the calf than the forearm (P < 0.001), whereas the half-maximal response for isoproterenol did not differ between limbs. We conclude that alpha1- but not beta-adrenergic-receptor responsiveness in human limbs is nonuniform. The relatively greater response to alpha1-adrenergic-receptor stimulation in the calf may represent an adaptive mechanism that limits blood pooling and capillary filtration in the legs during standing.
Low External Workloads Are Related to Higher Injury Risk in Professional Male Basketball Games.
Caparrós, Toni; Casals, Martí; Solana, Álvaro; Peña, Javier
2018-06-01
The primary purpose of this study was to identify potential risk factors for sports injuries in professional basketball. An observational retrospective cohort study of a male professional basketball team, using game tracking data, was conducted over three consecutive seasons. Thirty-three professional basketball players took part in this study. A total of 29 time-loss injuries were recorded during regular-season games, accounting for 244 total missed games, with a mean of 16.26 ± 15.21 per player and season. The tracking data included the following variables: minutes played, physiological load, physiological intensity, mechanical load, mechanical intensity, distance covered, walking maximal speed, sprinting maximal speed, maximal speed, average offensive speed, average defensive speed, level one to level four accelerations, level one to level four decelerations, player efficiency rating, and usage percentage. The influence of demographic characteristics, tracking data, and performance factors on the risk of injury was investigated using multivariate analysis with incidence rate ratios (IRRs). Athletes with three or fewer decelerations per game (IRR, 4.36; 95% CI, 1.78-10.6) and those running 1.3 miles or less per game (lower workload) (IRR, 6.42; 95% CI, 2.52-16.3) had a higher risk of injury during games (p < 0.01 in both cases). Therefore, underloaded players have a higher risk of injury. Adequate management of training loads might be a relevant factor in reducing the likelihood of injury according to individual profiles.
Maximizing Your Grant Development: A Guide for CEOs.
ERIC Educational Resources Information Center
Snyder, Thomas
1993-01-01
Since most private and public sources of external funding generally expect increased effort and accountability, Chief Executive Officers (CEOs) at two-year colleges must inform faculty and staff that if they do not expend extra effort their college will not receive significant grants. The CEO must also work with the college's professional…
A Prelude to Strategic Management of an Online Enterprise
ERIC Educational Resources Information Center
Pan, Cheng-Chang; Sivo, Stephen A.; Goldsmith, Clair
2016-01-01
Strategic management is expected to allow an organization to maximize given constraints and optimize limited resources in an effort to create a competitive advantage that leads to better results. For both for-profit and non-profit organizations, such strategic thinking helps the management make informed decisions and sustain long-term planning. To…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... state-operated permit banks for the purpose of maximizing the fishing opportunities made available by... activity to regain their DAS for that trip, providing another opportunity to profit from the DAS that would... entities. Further, no reductions in profit are expected for any small entities, so the profitability...
Smooth Transitions: Helping Students with Autism Spectrum Disorder Navigate the School Day
ERIC Educational Resources Information Center
Hume, Kara; Sreckovic, Melissa; Snyder, Kate; Carnahan, Christina R.
2014-01-01
In school, students are expected to navigate different types of transitions every day, including those between instructors, subjects, and instructional formats, as well as classrooms. Despite the routines that many teachers develop to facilitate efficient transitions and maximize instructional time, many learners with ASD continue to struggle with…
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image-formation process in cryo-electron microscopy of viruses is presented. Using this model, maximum-likelihood reconstructions of the 3D structure of viruses are computed using the expectation-maximization algorithm, and an example based on Cowpea mosaic virus is provided.
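In its simplest nonnegative form, maximum-likelihood reconstruction via expectation maximization reduces to the classical multiplicative MLEM update for Poisson-distributed data (the same iteration mentioned in the fMLEM abstract above). A minimal sketch on a hypothetical 2x2 system with noiseless data, not the actual cryo-EM image-formation model:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A @ x):
    the multiplicative update x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)
    increases the Poisson likelihood and keeps x nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # column sums ("sensitivity" image)
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x))) / sens
    return x

# Tiny hypothetical forward model with noiseless data.
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = mlem(A, y)
```

Because the update is multiplicative, a nonnegative starting point stays nonnegative throughout, which is why EM-type schemes are attractive whenever the object (emission density, count rate) cannot be negative.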
A Benefit-Maximization Solution to Our Faculty Promotion and Tenure Process
ERIC Educational Resources Information Center
Barat, Somjit; Harvey, Hanafiah
2015-01-01
Tenure-track/tenured faculty at higher education institutions are expected to teach, conduct research and provide service as part of their promotion and tenure process, the relative importance of each component varying with the position and/or the university. However, based on the author's personal experience, feedback received from several…
"At Least One" Way to Add Value to Conferences
ERIC Educational Resources Information Center
Wilson, Warren J.
2005-01-01
In "EDUCAUSE Quarterly," Volume 25, Number 3, 2002, Joan Getman and Nikki Reynolds published an excellent article about getting the most from a conference. They listed 10 strategies that a conference attendee could use to maximize the conference's yield in information and motivation: (1) Plan ahead; (2) Set realistic expectations; (3) Use e-mail…
ERIC Educational Resources Information Center
Tseng, Hung Wei; Yeh, Hsin-Te
2013-01-01
Teamwork factors can facilitate team members, committing themselves to the purposes of maximizing their own and others' contributions and successes. It is important for online instructors to comprehend students' expectations on learning collaboratively. The aims of this study were to investigate online collaborative learning experiences and to…
A Probability Based Framework for Testing the Missing Data Mechanism
ERIC Educational Resources Information Center
Lin, Johnny Cheng-Han
2013-01-01
Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data
ERIC Educational Resources Information Center
Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.
2009-01-01
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... the status quo. The action is expected to maximize the profitability for the spiny dogfish fishery... possible commercial quotas by not making a deduction from the ACL accounting for management uncertainty...) in 2015; however, not accounting for management uncertainty would have increased the risk of...
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.
Merrick, Jason R W; Leclerc, Philip
2016-04-01
Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make unaided decisions by maximizing expected utility, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm for seeking the rational decision, but we use prospect theory to solve for the attacker's decision, descriptively modeling the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision: whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
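The two deviations named above, loss aversion and likelihood insensitivity, are commonly captured with the Tversky-Kahneman value and probability-weighting functions. A minimal sketch using the standard published parameter estimates (α = 0.88, λ = 2.25, γ = 0.61) purely for illustration; this is not the article's calibrated model:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper (loss-averse, factor lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights large ones (likelihood insensitivity)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes):
    """Simple (separable) prospect value of a list of (probability, outcome)."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# A small-probability large loss looms far larger than its expected value:
gamble = [(0.01, -100.0), (0.99, 0.0)]
ev = sum(p * x for p, x in gamble)  # expected value: -1.0
pv = prospect_value(gamble)         # prospect value: roughly -7
```

The overweighted 1% probability combined with loss aversion makes the gamble feel several times worse than its expected value, which is exactly the kind of attacker behavior the article's descriptive model is meant to capture.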
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools that embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is treated as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
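The clustering step can be illustrated with a bare-bones EM fit of a two-component Gaussian mixture in one dimension. The synthetic data below stand in for image features (a real steganalysis feature space would be multivariate, with one cluster per embedding rate):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture.
    E-step: posterior responsibilities; M-step: weighted ML updates."""
    mu = np.array([x.min(), x.max()])  # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        d = x[:, None] - mu[None, :]
        lik = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, variances, and weights.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500),   # "clean" cluster
                    rng.normal(6.0, 1.0, 500)])  # "stego" cluster
mu, var, pi = em_gmm_1d(x)
```

After fitting, a new sample can be assigned to whichever component gives it the higher responsibility, which is the anomaly/multi-class decision rule in miniature.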
Liu, Haiguang; Spence, John C H
2014-11-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
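The chicken-and-egg iteration can be sketched in miniature: merge intensities under the current indexing choices, then re-assign each pattern to the indexing that better fits the merged model. Everything below (the `twin` operator, noiseless full patterns) is an illustrative assumption, not the authors' implementation:

```python
import random

def resolve_indexing(patterns, twin, iters=10):
    """Toy EM-style resolution of a twofold indexing ambiguity:
    alternately (1) merge intensities under the current assignments and
    (2) re-assign each pattern to whichever indexing agrees better with
    the merged model. `twin` maps a {reflection: intensity} dict to its
    twin-related indexing."""
    assign = [random.choice([False, True]) for _ in patterns]
    for _ in range(iters):
        # merge step: average intensities under the current assignments
        model, counts = {}, {}
        for flip, pat in zip(assign, patterns):
            for h, i in (twin(pat) if flip else pat).items():
                model[h] = model.get(h, 0.0) + i
                counts[h] = counts.get(h, 0) + 1
        for h in model:
            model[h] /= counts[h]
        # assignment step: pick the indexing that fits the model better
        def misfit(pat):
            return sum((model.get(h, 0.0) - i) ** 2 for h, i in pat.items())
        assign = [misfit(twin(pat)) < misfit(pat) for pat in patterns]
    return assign

# Synthetic check: five noiseless patterns of ten reflections, some
# recorded in the twinned indexing; after resolution all patterns agree
# (up to one unavoidable global flip).
true_int = {h: float((h + 1) ** 2) for h in range(10)}
twin = lambda pat: {9 - h: i for h, i in pat.items()}
random.seed(3)
patterns = [twin(true_int) if random.random() < 0.5 else dict(true_int)
            for _ in range(5)]
assign = resolve_indexing(patterns, twin)
resolved = [twin(p) if a else p for a, p in zip(assign, patterns)]
assert all(r == resolved[0] for r in resolved)
```

The real problem is harder because each still contributes only partial reflections, which is exactly why the model must be rebuilt at every iteration.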
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of service location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bar-Shalom, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; 
Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; 
Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rajaraman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, 
M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, F; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-01-30
Models of maximal flavor violation (MxFV) in elementary particle physics may contain at least one new scalar SU(2) doublet field Phi(FV)=(eta(0),eta(+)) that couples the first and third generation quarks (q_(1), q_(3)) via a Lagrangian term L(FV)=xi(13)Phi(FV)q(1)q(3). These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb(-1) collected by the Collider Detector at Fermilab II detector in ppbar collisions at sqrt[s]=1.96 TeV are analyzed for evidence of the MxFV signature. For a neutral scalar eta(0) with m_(eta(0))=200 GeV/c(2) and coupling xi(13)=1, approximately 11 signal events are expected over a background of 2.1+/-1.8 events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling xi(13) for m_(eta(0))=180-300 GeV/c(2).
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts at the cost of slightly decreased gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
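The underlying maximum likelihood-expectation maximization update is compact enough to show in full; OS-EM differs only in applying the same multiplicative update to ordered subsets of the projections in turn. A minimal sketch on a toy linear system, not the paper's CT geometry:

```python
def mlem(A, y, iters=100):
    """MLEM for y ~ Poisson(A x): the multiplicative update
    x <- x * A^T(y / Ax) / A^T(1) keeps estimates nonnegative.
    A is a list of rows; x and y are lists of floats."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                    # flat initial image
    colsum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        # forward project the current estimate
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # ratio of measured to predicted counts
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]
        # back project the ratios and apply the multiplicative update
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / colsum[j] for j in range(n)]
    return x

# Tiny noiseless two-pixel "scan": MLEM recovers the true activity.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_true = [2.0, 3.0]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]
x = mlem(A, y)
assert abs(x[0] - 2.0) < 1e-3 and abs(x[1] - 3.0) < 1e-3
```

Restricting the update to a small ROI, as the study does, shrinks n and hence the per-iteration cost.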
An Accommodations Model for the Secondary Inclusive Classroom
ERIC Educational Resources Information Center
Scanlon, David; Baker, Diana
2012-01-01
Despite expectations for accommodations in inclusive classrooms, little guidance for effective practice is available. Most accommodations policies and evidence-based practices address assessments. High school regular and special educators collaborated in focus groups to articulate a model based on their practices and perceptions of best practice.…
Power, Agency and Middle Leadership in English Primary Schools
ERIC Educational Resources Information Center
Hammersley-Fletcher, Linda; Strain, Michael
2011-01-01
English primary schools are considered quasi-collegial institutions within which staff communicate regularly and openly. The activities of staff, however, are bound by institutional norms and conditions and by societal expectations. Wider agendas of governmental control over the curriculum and external controls to ensure accountability and…
Faculty development: if you build it, they will come.
Steinert, Yvonne; Macdonald, Mary Ellen; Boillat, Miriam; Elizov, Michelle; Meterissian, Sarkis; Razack, Saleem; Ouellet, Marie-Noel; McLeod, Peter J
2010-09-01
The goals of this study were three-fold: to explore the reasons why some clinical teachers regularly attend centralised faculty development activities; to compare their responses with those of colleagues who do not attend, and to learn how we can make faculty development programmes more pertinent to teachers' needs. In 2008-2009, we conducted focus groups with 23 clinical teachers who had participated in faculty development activities on a regular basis in order to ascertain their perceptions of faculty development, reasons for participation, and perceived barriers against involvement. Thematic analysis and research team consensus guided the data interpretation. Reasons for regular participation included the perceptions that: faculty development enables personal and professional growth; learning and self-improvement are valued; workshop topics are viewed as relevant to teachers' needs; the opportunity to network with colleagues is appreciated, and initial positive experiences promote ongoing involvement. Barriers against participation mirrored those cited by non-attendees in an earlier study (e.g. volume of work, lack of time, logistical factors), but did not prevent participation. Suggestions for increasing participation included introducing a 'buddy system' for junior faculty members, an orientation workshop for new staff, and increased role-modelling and mentorship. The conceptualisation of faculty development as a means to achieve specific objectives and the desire for relevant programming that addresses current needs (i.e., expectancies), together with an appreciation of learning, self-improvement and networking with colleagues (i.e., values), were highlighted as reasons for participation by regular attendees. Medical educators should consider these 'lessons learned' in the design and delivery of faculty development offerings. 
They should also continue to explore the notion of faculty development as a social practice and the application of motivational theories that include expectancy-value constructs to personal and professional development.
Ribeiro, J P N; Matsumoto, R S; Takao, L K; Lima, M I S
2015-08-01
Estuaries present an environmental gradient that ranges from almost fresh water conditions to almost marine conditions. Salinity and flooding are the main abiotic drivers for plants. Therefore, plant zonation in estuaries is closely related to the tidal cycles. It is expected that the competitive abilities of plants would be inversely related to the tolerance toward environmental stress (tradeoff). Thus, in estuaries, plant zonation tends to be controlled by the environment near the sandbar and by competition away from it. This zonation pattern has been proposed for regular non-tropical estuaries. For tropical estuaries, the relative importance of rain is higher, and it is not clear to what extent this model can be extrapolated. We measured the tidal influence along the environmental gradient of a tropical irregular estuary and quantified the relative importance of the environment and the co-occurrence degree. Contrary to the narrow occurrence zone that would be expected for regular estuaries, plants presented large occurrence zones. However, the relative importance of the environment and competition followed the same patterns proposed for regular estuaries. The environmental conditions allow plants to occur in larger zones, but these zones arise from smaller and infrequent patches distributed across a larger area, and most species populations are concentrated in relatively narrow zones. Thus, we concluded that the zonation pattern in the Massaguaçu River estuary agrees with the tradeoff model.
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. "transcriptional"), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique that consults the gold-standard event trigger annotations in the form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.
Pasluosta, Cristian F; Gassner, Heiko; Winkler, Juergen; Klucken, Jochen; Eskofier, Bjoern M
2015-11-01
Current challenges demand a profound restructuration of the global healthcare system. A more efficient system is required to cope with the growing world population and increased life expectancy, which is associated with a marked prevalence of chronic neurological disorders such as Parkinson's disease (PD). One possible approach to meet this demand is a laterally distributed platform such as the Internet of Things (IoT). Real-time motion metrics in PD could be obtained virtually in any scenario by placing lightweight wearable sensors in the patient's clothes and connecting them to a medical database through mobile devices such as cell phones or tablets. Technologies exist to collect huge amounts of patient data not only during regular medical visits but also at home during activities of daily life. These data could be fed into intelligent algorithms to first discriminate relevant threatening conditions, adjust medications based on online obtained physical deficits, and facilitate strategies to modify disease progression. A major impact of this approach lies in its efficiency, by maximizing resources and drastically improving the patient experience. The patient participates actively in disease management via combined objective device- and self-assessment and by sharing information within both medical and peer groups. Here, we review and discuss the existing wearable technologies and the Internet-of-Things concept applied to PD, with an emphasis on how this technological platform may lead to a shift in paradigm in terms of diagnostics and treatment.
The Elite Athlete and Strenuous Exercise in Pregnancy.
Pivarnik, James M; Szymanski, Linda M; Conway, Michelle R
2016-09-01
Highly trained women continue to exercise during pregnancy, but there is little information available to guide them, and their health care providers, in how to maximize performance without jeopardizing the maternal-fetal unit. Available evidence focusing on average women who perform regular vigorous exercise suggests that this activity is helpful in preventing several maladies of pregnancy, with little to no evidence of harm. However, some studies have shown that there may be a limit to how intense an elite performer should exercise during pregnancy. Health care providers should monitor these women athletes carefully, to build trust and understanding.
Evaluation of γ-Induced Apoptosis in Human Peripheral Blood Lymphocytes
NASA Astrophysics Data System (ADS)
Baranova, Elena; Boreyko, Alla; Ravnachka, Ivanka; Saveleva, Maria
2010-01-01
Several experiments have been performed to study regularities in the induction of apoptotic cells in human lymphocytes by 60Co γ-rays at different times after irradiation. Apoptosis induction by 60Co γ-rays in human lymphocytes in different cell cycle phases (G0, S, G1, and G2) has been studied. The maximal apoptosis output in lymphocyte cells was observed in the S phase. The modifying effect of the replicative and reparative DNA synthesis inhibitors 1-β-D-arabinofuranosylcytosine (Ara-C) and hydroxyurea (Hu) on the kinetics of 60Co γ-ray-induced apoptosis in human lymphocytes has also been studied.
Young, R M; Oei, T P
2000-01-01
The potential tension reduction effects of alcohol may be most appropriately tested by examining the role of alcohol-related beliefs regarding alcohol's anxiolytic properties. The relationship between affective change, drinking refusal self-efficacy, tension reduction alcohol expectancies, and ongoing drinking behavior was examined among 57 regular drinkers. Alcohol consumption and antecedent and consequent mood states were monitored prospectively by diary. Social learning theory hypothesizes that low drinking refusal self-efficacy when experiencing a negative mood state should be associated with more frequent drinking when tense. Strong alcohol expectancies of tension reduction were hypothesized to predict subsequent tension reduction. Contrary to this hypothesis, the present study found that alcohol expectancies were more strongly related to antecedent mood states. Only a weak relationship between drinking refusal self-efficacy and predrinking tension, and between alcohol expectancy and subsequent tension reduction, was evident.
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
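The optimization the authors describe can be illustrated numerically: choose the flight initiation distance d that maximizes expected fitness, given survival, benefit, and escape-cost curves. The functional forms and constants below are invented for illustration, not taken from the paper:

```python
import math

def expected_fitness(d, F0=1.0):
    """Expected fitness of fleeing at distance d (assumed forms):
    fleeing earlier is safer but forgoes encounter benefits and costs
    more energy."""
    survival = 1 - math.exp(-d)            # P(surviving the encounter)
    benefit = 0.5 * math.exp(-0.5 * d)     # gain from staying longer
    escape_cost = 0.1 * d                  # energetic cost of fleeing
    return survival * (F0 + benefit) - escape_cost

# Grid search for the optimal flight initiation distance.
ds = [i * 0.01 for i in range(1, 1000)]
d_opt = max(ds, key=expected_fitness)
# An interior optimum: the prey neither flees immediately on detection
# nor waits until the costs break even.
assert 1.0 < d_opt < 3.0
```

The break-even model fixes d where the two cost curves cross; the optimality model instead maximizes the expression above, which is why it can yield strictly positive expected fitness gains.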
Graham, Jeffrey K; Smith, Myron L; Simons, Andrew M
2014-07-22
All organisms are faced with environmental uncertainty. Bet-hedging theory expects unpredictable selection to result in the evolution of traits that maximize the geometric-mean fitness even though such traits appear to be detrimental over the shorter term. Despite the centrality of fitness measures to evolutionary analysis, no direct test of the geometric-mean fitness principle exists. Here, we directly distinguish between predictions of competing fitness maximization principles by testing Cohen's 1966 classic bet-hedging model using the fungus Neurospora crassa. The simple prediction is that propagule dormancy will evolve in proportion to the frequency of 'bad' years, whereas the prediction of the alternative arithmetic-mean principle is the evolution of zero dormancy as long as the expectation of a bad year is less than 0.5. Ascospore dormancy fraction in N. crassa was allowed to evolve under five experimental selection regimes that differed in the frequency of unpredictable 'bad years'. Results were consistent with bet-hedging theory: final dormancy fraction in 12 genetic lineages across 88 independently evolving samples was proportional to the frequency of bad years, and evolved both upwards and downwards as predicted from a range of starting dormancy fractions. These findings suggest that selection results in adaptation to variable rather than to expected environments. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
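Cohen's 1966 model referenced above reduces to maximizing the log geometric-mean fitness over the dormancy fraction. A grid-search sketch with illustrative parameters (the good-year yield is an assumption, not a value from the experiment):

```python
import math

def log_geom_mean_fitness(d, p_bad, yield_good=10.0):
    """Cohen-style model: a fraction d of propagules stays dormant.
    In a good year, germinated propagules multiply by yield_good; in a
    bad year, only dormant propagules survive. Returns the log
    geometric-mean fitness across year types."""
    good = (1 - d) * yield_good + d
    bad = d
    return (1 - p_bad) * math.log(good) + p_bad * math.log(bad)

def optimal_dormancy(p_bad, grid=10000):
    """Grid search over dormancy fractions in (0, 1)."""
    ds = [i / grid for i in range(1, grid)]
    return max(ds, key=lambda d: log_geom_mean_fitness(d, p_bad))

# Geometric-mean reasoning predicts dormancy tracking the frequency of
# bad years; arithmetic-mean reasoning would instead predict zero
# dormancy for any bad-year frequency below 0.5.
assert optimal_dormancy(0.1) < optimal_dormancy(0.4)
assert optimal_dormancy(0.4) > 0.3
```

This monotone relation between bad-year frequency and optimal dormancy is the prediction the N. crassa selection lines were tested against.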
Defender-Attacker Decision Tree Analysis to Combat Terrorism.
Garcia, Ryan J B; von Winterfeldt, Detlof
2016-12-01
We propose a methodology, called defender-attacker decision tree analysis, to evaluate defensive actions against terrorist attacks in a dynamic and hostile environment. Like most game-theoretic formulations of this problem, we assume that the defenders act rationally by maximizing their expected utility or minimizing their expected costs. However, we do not assume that attackers maximize their expected utilities. Instead, we encode the defender's limited knowledge about the attacker's motivations and capabilities as a conditional probability distribution over the attacker's decisions. We apply this methodology to the problem of defending against possible terrorist attacks on commercial airplanes, using one of three weapons: infrared-guided MANPADS (man-portable air defense systems), laser-guided MANPADS, or visually targeted RPGs (rocket propelled grenades). We also evaluate three countermeasures against these weapons: DIRCMs (directional infrared countermeasures), perimeter control around the airport, and hardening airplanes. The model includes deterrence effects, the effectiveness of the countermeasures, and the substitution of weapons and targets once a specific countermeasure is selected. It also includes a second stage of defensive decisions after an attack occurs. Key findings are: (1) due to the high cost of the countermeasures, not implementing countermeasures is the preferred defensive alternative for a large range of parameters; (2) if the probability of an attack and the associated consequences are large, a combination of DIRCMs and ground perimeter control are preferred over any single countermeasure. © 2016 Society for Risk Analysis.
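The core computation is an expected-cost evaluation in which the attacker enters only through a conditional probability distribution over attack options given each defense, rather than as a utility maximizer. A sketch with entirely hypothetical numbers:

```python
def best_defense(defenses, attack_dist, consequence, cost):
    """Defender minimizes expected cost; the attacker is encoded as a
    conditional distribution over attack options given each defense."""
    def expected_cost(d):
        return cost[d] + sum(p * consequence[a]
                             for a, p in attack_dist[d].items())
    return min(defenses, key=expected_cost)

# Hypothetical two-option example: the countermeasure deters the worst
# attack but carries a high fixed cost.
defenses = ["none", "countermeasure"]
cost = {"none": 0.0, "countermeasure": 8.0}
consequence = {"manpads": 100.0, "rpg": 40.0, "no_attack": 0.0}
attack_dist = {
    "none": {"manpads": 0.05, "rpg": 0.02, "no_attack": 0.93},
    "countermeasure": {"manpads": 0.01, "rpg": 0.03, "no_attack": 0.96},
}
# Expected costs: none = 5.8; countermeasure = 8 + 1 + 1.2 = 10.2, so
# doing nothing is preferred here, echoing the paper's finding that
# costly countermeasures lose over a large parameter range.
assert best_defense(defenses, attack_dist, consequence, cost) == "none"
```

The full methodology extends this single step to a tree, with deterrence, weapon substitution, and a second stage of post-attack decisions folded into the conditional distributions.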
Optimal rotation sequences for active perception
NASA Astrophysics Data System (ADS)
Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin
2016-05-01
One major objective of autonomous systems navigating in dynamic environments is gathering information needed for self localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception breaks down to the problem of calculating alignment sequences which maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), where the system decides in each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain is changing over time, caused by the dynamic environment and the continuous movement of the spacecraft.
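The value-iteration step at the heart of the planning approach can be shown on a toy two-orientation sensor MDP; the states, rewards, and transitions below are invented for illustration, not the paper's rotation-time manifold:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Standard value iteration: V(s) = max_a [R(s,a) + gamma *
    sum_s' P(s'|s,a) V(s')]; returns values and the greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[s2]
                        for s2, p in P[s][a].items())
                        for a in actions)
                 for s in states}
        done = max(abs(V_new[s] - V[s]) for s in states) < eps
        V = V_new
        if done:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma * sum(
                         p * V[s2] for s2, p in P[s][a].items()))
              for s in states}
    return V, policy

# Toy sensor-pointing MDP: two orientations; "rotate" switches between
# them, "stay" keeps the current one. Information reward is only earned
# while pointing at orientation "B".
states, actions = ["A", "B"], ["stay", "rotate"]
R = {"A": {"stay": 0.0, "rotate": 0.0}, "B": {"stay": 1.0, "rotate": 0.0}}
P = {"A": {"stay": {"A": 1.0}, "rotate": {"B": 1.0}},
     "B": {"stay": {"B": 1.0}, "rotate": {"A": 1.0}}}
V, policy = value_iteration(states, actions, P, R)
assert policy == {"A": "rotate", "B": "stay"}
```

In the paper's setting the reward would be the expected information gain of each sensor, which changes over time, hence the joint rotation-time state space.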
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). 
This work has the potential to increase the value of data returned from spacecraft under certain conditions.
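Without coding, the model reduces to fractionally packing the highest value-density messages onto the most reliable links; the innovation's linear-programming formulation and coding can only improve on this baseline. A sketch of that uncoded baseline (a greedy packing, not the paper's LP):

```python
def assign_messages(messages, links):
    """Uncoded baseline for the parallel-link model: fractionally pack
    the highest value-density messages onto the most reliable links.
    messages: list of (size, value); links: list of (capacity, p_work).
    Returns the expected total value received, since each link delivers
    its load with probability p_work."""
    msgs = sorted(messages, key=lambda m: m[1] / m[0], reverse=True)
    links = sorted(links, key=lambda l: l[1], reverse=True)
    expected = 0.0
    i, rem_size, rem_value = 0, 0.0, 0.0
    for cap, p in links:
        while cap > 1e-12 and (rem_size > 1e-12 or i < len(msgs)):
            if rem_size <= 1e-12:           # start the next message
                rem_size, rem_value = msgs[i]
                i += 1
            take = min(cap, rem_size)       # piece value is proportional
            piece_value = rem_value * take / rem_size
            expected += p * piece_value
            rem_value -= piece_value
            rem_size -= take
            cap -= take
    return expected

# Two messages, two links: the valuable message rides the reliable link.
msgs = [(10.0, 100.0), (10.0, 10.0)]   # (size, value)
links = [(10.0, 0.9), (10.0, 0.5)]     # (capacity, P(link works))
assert abs(assign_messages(msgs, links) - (0.9 * 100 + 0.5 * 10)) < 1e-9
```

Coding across links beats this baseline when partial combinations let the receiver decode high-value messages from any sufficiently large working subset of links.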
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
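The price-of-anarchy concept in this record can be illustrated with the classic two-link Pigou example (a generic textbook sketch, not the paper's random-lattice model): selfish routing on one congestible link (latency equal to its flow) and one fixed-latency link gives a total cost 4/3 times the social optimum.

```python
# Classic Pigou network: unit demand split between a congestible link
# (latency = flow x) and a fixed link (latency = 1). Not the paper's model.
def total_cost(x):                      # x = flow on the congestible link
    return x * x + (1 - x) * 1.0        # flow * latency summed over links

# Nash equilibrium: everyone takes the congestible link (its latency <= 1).
nash = total_cost(1.0)
# Social optimum: minimize x^2 + (1 - x) over x; attained at x = 1/2.
opt = min(total_cost(i / 1000) for i in range(1001))
poa = nash / opt
print(round(poa, 3))  # the classic 4/3 price of anarchy
```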
Dental Care Utilization among North Carolina Rural Older Adults
Arcury, Thomas A.; Savoca, Margaret R.; Anderson, Andrea M.; Chen, Haiying; Gilbert, Gregg H.; Bell, Ronny A.; Leng, Xiaoyan; Reynolds, Teresa; Quandt, Sara A.
2012-01-01
Objectives This analysis delineates the predisposing, need, and enabling factors that are significantly associated with regular and recent dental care in a multi-ethnic sample of rural older adults. Methods A cross-sectional comprehensive oral health survey conducted with a random, multi-ethnic (African American, American Indian, white) sample of 635 community-dwelling adults aged 60 years and older was completed in two rural southern counties. Results Almost no edentulous rural older adults received dental care. Slightly more than one-quarter (27.1%) of dentate rural older adults received regular dental care and slightly more than one-third (36.7%) received recent dental care. Predisposing (education) and enabling (regular place for dental care) factors associated with receiving regular and recent dental care among dentate participants point to greater resources being the driving force in receiving dental care. Contrary to expectations of the Behavioral Model of Health Services, those with the least need (e.g., better self-rated oral health) received regular dental care; this has been referred to as the Paradox of Dental Need. Conclusions Regular and recent dental care are infrequent among rural older adults. Those not receiving dental care are those who most need care. Community access to dental care and the ability of older adults to pay for dental care must be addressed by public health policy to improve the health and quality of life of older adults in rural communities. PMID:22536828
Evaluation of a Tobacco and Alcohol Abuse Prevention Curriculum for Adolescents.
ERIC Educational Resources Information Center
Hansen, William B.; And Others
Programs which have been somewhat effective in reducing the rates of onset of regular tobacco use have featured such components as peer pressure resistance training, correction of normative expectations, inoculation against mass media messages, information about parental influences, information about consequences of use, public commitments, or…
Crossing the Generational Divide: Supporting Generational Differences at Work
ERIC Educational Resources Information Center
Berl, Patricia Scallan
2006-01-01
Differences in attitudes and behaviors, regularly exhibited between youth and their elders, are frequently referred to as the "generation gap". On the job, these generational distinctions are becoming increasingly complex as "multi-generation gaps" emerge, with three or more generations defining roles and expectations, each vying for positions in…
47 CFR 74.1263 - Time of operation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or booster station is not required to adhere to any regular schedule of operation. However, the licensee of an FM translator or booster station is expected to provide a dependable service to the extent that...
47 CFR 74.1263 - Time of operation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or booster station is not required to adhere to any regular schedule of operation. However, the licensee of an FM translator or booster station is expected to provide a dependable service to the extent that...
47 CFR 74.1263 - Time of operation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or booster station is not required to adhere to any regular schedule of operation. However, the licensee of an FM translator or booster station is expected to provide a dependable service to the extent that...
47 CFR 74.1263 - Time of operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or booster station is not required to adhere to any regular schedule of operation. However, the licensee of an FM translator or booster station is expected to provide a dependable service to the extent that...
The Overconfident Principles of Economics Student: An Examination of a Metacognitive Skill.
ERIC Educational Resources Information Center
Grimes, Paul W.
2002-01-01
Examined the effect of demographic characteristics, academic endowments, course preparation, and course performance variables on the accuracy of pretest expectations when asking students to predict their performance on a regularly scheduled macroeconomics midterm examination. Finds overconfidence and misjudgments about the scope of the midterm…
Restructuring the Guidance Delivery System: Implications for High School Counselors.
ERIC Educational Resources Information Center
Greer, Richard M.; Richardson, Michael D.
1992-01-01
Notes that large portion of high school counselor's clientele, working parents, are not available during regular school hours. Suggests model program using flexible scheduling for high school counselors designed to address the issues of a changing clientele, a changing society, and changing expectation of counselors and schools. (NB)
34 CFR 472.5 - What definitions apply?
Code of Federal Regulations, 2010 CFR
2010-07-01
... attendance under State law, and whose receipt of project services is expected to result in new employment... principally for the provision of vocational education to individuals who have completed or left high school... regular students both individuals who have completed high school and individuals who have left high school...
34 CFR 472.5 - What definitions apply?
Code of Federal Regulations, 2011 CFR
2011-07-01
... attendance under State law, and whose receipt of project services is expected to result in new employment... principally for the provision of vocational education to individuals who have completed or left high school... regular students both individuals who have completed high school and individuals who have left high school...
Liu, Shu-Ming; Wang, Shi-Jun; Song, Si-Yao; Zou, Yong; Wang, Jun-Ru; Sun, Bing-Yin
Great variations have been found in the composition and content of the essential oil of Zanthoxylum bungeanum Maxim. (Rutaceae), resulting from factors such as harvest time, drying and extraction methods (Huang et al., 2006; Shao et al., 2013), and the solvent and herbal parts used (Zhang, 1996; Cao and Zhang, 2010; Wang et al., 2011). However, in terms of artificial introduction and cultivation, there is little research on the chemical composition of essential oil extracted from Z. bungeanum Maxim. cultivars introduced from different origins. In this study, the composition and content of essential oil from six cultivars (I-VI) were investigated. They had been introduced and cultivated for 11 years under the same cultivation conditions. The cultivars were as follows: Qin'an (I), originally introduced from Qin'an City in Gansu Province; Dahongpao A (II), from She County in Hebei Province; Dahongpao B (III), from Fuping County; Dahongpao C (IV), from Tongchuan City; Meifengjiao (V), from Feng County; and Shizitou (VI), from Hancheng City, in Shaanxi Province, China. This research is expected to provide a theoretical basis for the further introduction, cultivation, and commercial development of Z. bungeanum Maxim.
NASA Astrophysics Data System (ADS)
Atalay, Bora; Berker, A. Nihat
2018-05-01
Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied for the number of states q = 3, 4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d > 1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q × q degenerate ordering, which is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. In sharp contrast with the infinitesimally higher dimension 1 + ε, the system is, as expected, disordered at all temperatures for d = 1.
Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A
2014-12-01
Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return on investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide a lower net benefit, with a return on investment of 79%. Cost-effectiveness and value of information analyses were based on data from a single pilot trial, which might affect the accuracy of our uncertainty estimation.
Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
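The expected-net-benefit logic in this record can be sketched with a toy Monte Carlo calculation (all numbers are invented; per-patient expected value of perfect information is used as a simple upper bound on what a trial could deliver):

```python
# Hedged sketch of value-of-information reasoning: a trial is worthwhile when
# the population value of the information it yields exceeds its cost.
# All numbers below are made-up assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
# Simulated uncertainty in incremental net benefit of a new device ($/patient)
inb = rng.normal(loc=50.0, scale=200.0, size=100_000)

# Per-patient expected value of perfect information: with current evidence we
# adopt iff E[inb] > 0; perfect information avoids the wrong calls.
evpi_per_patient = np.mean(np.maximum(inb, 0)) - max(np.mean(inb), 0)

population = 20_000          # patients affected by the adoption decision
trial_cost = 400_000.0
# Treating EVPI as an upper bound on the trial's information value:
enb_upper = evpi_per_patient * population - trial_cost
print(evpi_per_patient > 0, enb_upper > 0)
```

A full analysis of the kind described in the record would replace EVPI with the expected value of sample information for each candidate design and sample size, then pick the design maximizing the net benefit.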
Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc
2014-01-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632
Gamarel, Kristi E; Nelson, Kimberly M; Stephenson, Rob; Santiago Rivera, Olga J; Chiaramonte, Danielle; Miller, Robin Lin
2018-02-01
Young gay, bisexual and other men who have sex with men (YGBMSM) and young transgender women are disproportionately affected by HIV/AIDS. The success of biomedical prevention strategies is predicated on regular HIV testing; however, there has been limited uptake of testing among YGBMSM and young transgender women. Anticipated HIV stigma (expecting rejection as a result of seroconversion) may serve as a significant barrier to testing. A cross-sectional sample of YGBMSM (n = 719, 95.5%) and young transgender women (n = 33, 4.4%) ages 15-24 was recruited to participate in a one-time survey. Approximately one-third of youth had not tested within the last 6 months. In a multivariable model, anticipated HIV stigma and reporting a non-gay identity were associated with increased odds of delaying regular HIV testing. Future research and interventions are warranted to address HIV stigma in order to increase regular HIV testing among YGBMSM and transgender women.
Finite element based contact analysis of radio frequency MEMs switch membrane surfaces
NASA Astrophysics Data System (ADS)
Liu, Jin-Ya; Chalivendra, Vijaya; Huang, Wenzhen
2017-10-01
Finite element simulations were performed to determine the contact behavior of radio frequency (RF) micro-electro-mechanical (MEM) switch contact surfaces under monotonic and cyclic loading conditions. Atomic force microscopy (AFM) was used to capture the topography of RF-MEM switch membranes, which were then analyzed for multi-scale regular as well as fractal structures. Frictionless, non-adhesive 3D finite element contact analysis was carried out at different length scales to investigate the contact behavior of the regular-fractal surface using an elasto-plastic material model. Dominant micro-scale regular patterns were found to significantly change the contact behavior. Contact areas mainly cluster around the regular pattern; the contribution from the fractal structure is not significant. Under cyclic loading conditions, plastic deformation in the first loading/unloading cycle smooths the surface. Subsequent loading/unloading cycles then involve elastic contact without changing the morphology of the contacting surfaces. The work is expected to shed light on the quality of the switch surface contact as well as the optimum design of RF MEM switch surfaces.
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
On linear Landau Damping for relativistic plasmas via Gevrey regularity
NASA Astrophysics Data System (ADS)
Young, Brent
2015-10-01
We examine the phenomenon of Landau damping in relativistic plasmas via a study of the relativistic Vlasov-Poisson system (both on the torus and on R^3) linearized around a sufficiently nice, spatially uniform kinetic equilibrium. We find that exponential decay of spatial Fourier modes is impossible under modest symmetry assumptions. However, by assuming the equilibrium and initial data are sufficiently regular functions of velocity for a given wavevector (in particular, that they exhibit a kind of Gevrey regularity), we show that it is possible for the mode associated with this wavevector to decay like exp(-|t|^δ) (with 0 < δ < 1) if the magnitude of the wavevector exceeds a certain critical size that depends on the character of the interaction. We also give a heuristic argument for why one should not expect such rapid decay for modes with wavevectors below this threshold.
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-08-01
Blast furnace operators expect to get sinter with homogeneous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their sources, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, both to save money and to recycle wastes. The main scope of this paper is to obtain the least expensive iron ore blend for the sintering process that will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. Statistical tools were used systematically to analyze historical data, including linear and partial correlations applied to the data, and fuzzy clustering based on the Sugeno fuzzy inference system was used to establish relationships among the available variables.
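The least-cost blend objective in this record can be sketched as a small linear program (illustrative numbers, not the paper's data or its statistical/fuzzy-clustering workflow):

```python
# Generic least-cost blend LP: choose ore fractions minimizing cost subject
# to a minimum Fe grade, a maximum gangue content, and limited availability
# of the cheap mid-grade by-product. All numbers are invented assumptions.
from scipy.optimize import linprog

cost = [60.0, 45.0, 30.0]     # $/t of each ore or by-product
fe = [0.65, 0.58, 0.50]       # Fe mass fraction of each material
gangue = [0.04, 0.07, 0.12]   # gangue mass fraction of each material

res = linprog(
    c=cost,
    A_ub=[[-f for f in fe],   # Fe grade of the blend >= 0.58
          gangue],            # gangue content of the blend <= 0.08
    b_ub=[-0.58, 0.08],
    A_eq=[[1.0, 1.0, 1.0]],   # blend fractions sum to one
    b_eq=[1.0],
    bounds=[(0, 1), (0, 0.5), (0, 1)],  # material 2 limited to half the blend
)
print(round(res.fun, 2), [round(v, 3) for v in res.x])
```

With these numbers the solver blends all three materials: the cheap high-gangue material is usable only when diluted by the high-grade ore, which is the tradeoff the record's blend optimization captures.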
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzini, Gianmarco
2012-07-13
We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, which use arbitrarily regular discrete spaces V_h ⊂ C^α, α ∈ ℕ. The degrees of freedom are (a) solution and derivative values of various degrees at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.
Santaniello, Sabato; McCarthy, Michelle M; Montgomery, Erwin B; Gale, John T; Kopell, Nancy; Sarma, Sridevi V
2015-02-10
High-frequency deep brain stimulation (HFS) is clinically recognized to treat parkinsonian movement disorders, but its mechanisms remain elusive. Current hypotheses suggest that the therapeutic merit of HFS stems from increasing the regularity of the firing patterns in the basal ganglia (BG). Although this is consistent with experiments in humans and animal models of Parkinsonism, it is unclear how the pattern regularization would originate from HFS. To address this question, we built a computational model of the cortico-BG-thalamo-cortical loop in normal and parkinsonian conditions. We simulated the effects of subthalamic deep brain stimulation both proximally to the stimulation site and distally through orthodromic and antidromic mechanisms for several stimulation frequencies (20-180 Hz) and, correspondingly, we studied the evolution of the firing patterns in the loop. The model closely reproduced experimental evidence for each structure in the loop and showed that neither the proximal effects nor the distal effects individually account for the observed pattern changes, whereas the combined impact of these effects increases with the stimulation frequency and becomes significant for HFS. Perturbations evoked proximally and distally propagate along the loop, rendezvous in the striatum, and, for HFS, positively overlap (reinforcement), thus causing larger poststimulus activation and more regular patterns in striatum. Reinforcement is maximal for the clinically relevant 130-Hz stimulation and restores a more normal activity in the nuclei downstream. These results suggest that reinforcement may be pivotal to achieve pattern regularization and restore the neural activity in the nuclei downstream and may stem from frequency-selective resonant properties of the loop.
Opinion evolution influenced by informed agents
NASA Astrophysics Data System (ADS)
Fan, Kangqi; Pedrycz, Witold
2016-11-01
Guiding public opinions toward a pre-set target by informed agents can be a strategy adopted in some practical applications. The informed agents are common agents who are employed or chosen to spread the pre-set opinion. In this work, we propose a social judgment based opinion (SJBO) dynamics model to explore the opinion evolution under the influence of informed agents. The SJBO model distinguishes between inner opinions and observable choices, and incorporates both the compromise between similar opinions and the repulsion between dissimilar opinions. Three choices (support, opposition, and remaining undecided) are considered in the SJBO model. Using the SJBO model, both the inner opinions and the observable choices can be tracked during the opinion evolution process. The simulation results indicate that if the exchanges of inner opinions among agents are not available, the effect of informed agents is mainly dependent on the characteristics of regular agents, including the assimilation threshold, decay threshold, and initial opinions. Increasing the assimilation threshold and decay threshold can improve the guiding effectiveness of informed agents. Moreover, if the initial opinions of regular agents are close to null, the full and unanimous consensus at the pre-set opinion can be realized, indicating that, to maximize the influence of informed agents, the guidance should be started when regular agents have little knowledge about a subject under consideration. If the regular agents have had clear opinions, the full and unanimous consensus at the pre-set opinion cannot be achieved. However, the introduction of informed agents can make the majority of agents choose the pre-set opinion.
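The informed-agent effect in this record can be illustrated with a much simpler bounded-confidence simulation (a generic Deffuant-style sketch, not the SJBO model: no choices, no repulsion; all parameters invented). Informed agents hold the pre-set opinion fixed and pull regular agents toward it whenever the opinion gap is within the assimilation threshold.

```python
# Toy bounded-confidence dynamics with informed agents pinned at the target.
import random

random.seed(3)
N, informed = 100, 10             # regular agents, informed agents
eps, mu, target = 1.5, 0.5, 1.0   # assimilation threshold, step, pre-set opinion
ops = [random.uniform(-0.2, 0.2) for _ in range(N)]  # regulars near null
ops += [target] * informed                           # informed agents

for _ in range(20_000):
    i, j = random.sample(range(len(ops)), 2)
    if abs(ops[i] - ops[j]) < eps:        # interact only within the threshold
        di = mu * (ops[j] - ops[i])
        dj = mu * (ops[i] - ops[j])
        if i < N:                          # informed agents never move
            ops[i] += di
        if j < N:
            ops[j] += dj

mean_regular = sum(ops[:N]) / N
print(round(mean_regular, 2))  # regulars drift toward the pre-set opinion
```

Consistent with the record's finding, the pull works here because the regulars start near null, inside the informed agents' reach; with initial opinions far outside eps, the informed agents would never interact with them.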
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
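The threshold structure of the optimal policy can be sketched with a toy backward induction (an assumed stationary model for illustration, not the paper's: fixed search cost, no changing female condition, uniform male quality):

```python
# Toy stochastic dynamic program for sequential mate choice: T periods,
# male quality uniform on {1..5}, search cost c per period, reward 0 if the
# season ends without mating. All parameters are invented assumptions.
T, c = 10, 0.2
qualities = [1, 2, 3, 4, 5]
V = [0.0] * (T + 1)                 # V[t] = expected reward with t periods left
for t in range(1, T + 1):
    cont = V[t - 1] - c             # value of rejecting and searching on
    V[t] = sum(max(q, cont) for q in qualities) / len(qualities)

# Optimal policy: with t periods left, accept any male of quality above
# V[t-1] - c, so the acceptance threshold falls as the season runs out.
thresholds = [V[t - 1] - c for t in range(1, T + 1)]
print([round(x, 2) for x in thresholds])
```

In the paper's richer model the continuation value also depends on the female's stochastically changing condition, but the same quality-threshold form of the optimal policy emerges.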
What's wrong with hazard-ranking systems? An expository note.
Cox, Louis Anthony Tony
2009-07-01
Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
ERIC Educational Resources Information Center
Lucas, Christopher M.
2009-01-01
For educators in the field of higher education and judicial affairs, issues are growing. Campus adjudicators must somehow maximize every opportunity for student education and development in the context of declining resources and increasing expectations of public accountability. Numbers of student misconduct cases, including matters of violence and…
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Charles T. Stiff; William F. Stansfield
2004-01-01
Separate thinning guidelines were developed for maximizing land expectation value (LEV), present net worth (PNW), and total sawlog yield (TSY) of existing and future loblolly pine (Pinus taeda L.) plantations in eastern Texas. The guidelines were created using data from simulated stands which were thinned one time during their rotation using a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... the season through December 31, the end of the fishing year, thus maximizing this sector's opportunity... expected to significantly reduce profits for a substantial number of small entities. This proposed rule... and associated increased profits for for-hire entities associated with the recreational harvest of red...
ERIC Educational Resources Information Center
Bouchet, Francois; Harley, Jason M.; Trevors, Gregory J.; Azevedo, Roger
2013-01-01
In this paper, we present the results obtained using a clustering algorithm (Expectation-Maximization) on data collected from 106 college students learning about the circulatory system with MetaTutor, an agent-based Intelligent Tutoring System (ITS) designed to foster self-regulated learning (SRL). The three extracted clusters were validated and…
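The Expectation-Maximization clustering named above can be illustrated with a minimal two-component, one-dimensional Gaussian-mixture fit; this is a generic sketch on synthetic data, not the MetaTutor dataset or the study's actual feature space:

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Two-component 1-D Gaussian mixture fitted by Expectation-
    Maximization (minimal sketch of the clustering algorithm)."""
    mu = [min(data), max(data)]          # deterministic initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means, and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return mu, var, w

# Synthetic data: two well-separated clusters around 0 and 5
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(100)]
        + [rng.gauss(5.0, 0.5) for _ in range(100)])
mu, var, w = em_gmm_1d(data)
```

After fitting, the recovered component means sit near the true cluster centres, and each point's responsibilities give its soft cluster assignment.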
USDA-ARS?s Scientific Manuscript database
Water shortages are responsible for the greatest crop losses around the world and are expected to worsen. In arid areas where agriculture is dependent on irrigation, various forms of deficit irrigation management have been suggested to optimize crop yields for available soil water. The relationshi...
Optimizing reserve expansion for disjunct populations of San Joaquin kit fox
Robert G. Haight; Brian Cypher; Patrick A. Kelly; Scott Phillips; Katherine Ralls; Hugh P. Possingham
2004-01-01
Expanding habitat protection is a common strategy for species conservation. We present a model to optimize the expansion of reserves for disjunct populations of an endangered species. The objective is to maximize the expected number of surviving populations subject to budget and habitat constraints. The model accounts for benefits of reserve expansion in terms of...
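The objective described above, maximizing the expected number of surviving populations subject to a budget, has the shape of a knapsack-style selection problem. A brute-force sketch with hypothetical costs and survival-probability gains (the paper's model also includes habitat constraints, omitted here):

```python
def best_expansion(sites, budget):
    """Enumerate subsets of expansion actions (0/1 knapsack) to maximize
    the expected number of surviving populations under a budget.
    sites: list of (cost, gain_in_expected_surviving_populations)."""
    best = (0.0, frozenset())
    n = len(sites)
    for mask in range(1 << n):
        chosen = [i for i in range(n) if mask >> i & 1]
        cost = sum(sites[i][0] for i in chosen)
        if cost > budget:
            continue                      # infeasible under the budget
        gain = sum(sites[i][1] for i in chosen)
        if gain > best[0]:
            best = (gain, frozenset(chosen))
    return best

# Hypothetical (cost, expected-survival gain) pairs for four sites
sites = [(4, 0.30), (3, 0.25), (2, 0.20), (5, 0.45)]
gain, chosen = best_expansion(sites, budget=7)
```

With a budget of 7, the best feasible portfolio funds sites 2 and 3 for a combined gain of 0.65 expected surviving populations, beating any pair of individually cheaper sites.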
Benefits of advanced software techniques for mission planning systems
NASA Technical Reports Server (NTRS)
Gasquet, A.; Parrod, Y.; Desaintvincent, A.
1994-01-01
The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods (listwise deletion, full information maximum likelihood, regression imputation, and expectation maximization (EM) imputation) were examined in terms of…
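Of the four methods compared, regression imputation is easy to sketch: fit a regression on the complete cases, then fill each missing value from the fitted line. A minimal illustration with made-up data:

```python
def regression_impute(xs, ys):
    """Regression imputation sketch: fit y = a + b*x on complete cases
    (simple least squares), then fill missing y values from the fitted
    line. One of the four missing-data methods named in the abstract;
    the data below are illustrative only."""
    complete = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(complete)
    mx = sum(x for x, _ in complete) / n
    my = sum(y for _, y in complete) / n
    b = (sum((x - mx) * (y - my) for x, y in complete)
         / sum((x - mx) ** 2 for x, _ in complete))
    a = my - b * mx
    return [y if y is not None else a + b * x for x, y in zip(xs, ys)]

xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.0, None, 8.0, 10.0]   # one value missing
filled = regression_impute(xs, ys)
```

The complete cases lie exactly on y = 2x, so the missing value at x = 3 is filled with 6.0. Listwise deletion, by contrast, would simply drop that case.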
What Influences Young Canadians to Pursue Post-Secondary Studies? Final Report
ERIC Educational Resources Information Center
Dubois, Julie
2002-01-01
This paper uses the theory of human capital to model post-secondary education enrolment decisions. The model is based on the assumption that high school graduates assess the costs and benefits associated with various levels of post-secondary education (college or university) and select the option that maximizes the expected net present value.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan
This paper develops and utilizes an optimization-based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to existing state-of-the-art set-point reset strategies. The expected efficiency increase driven by relaxations of operational constraints is evaluated.
Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua
2018-01-01
Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
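The selection criterion described in the abstract, Fisher information per unit of expected response time, can be sketched as follows; the two-parameter logistic (2PL) information function is standard, while the per-item expected times are hypothetical placeholders:

```python
import math

def fisher_info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1-p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(items, theta):
    """Pick the item maximizing information per unit of expected response
    time (sketch of the criterion in the abstract; the expected-time model
    here is just a fixed hypothetical value per item)."""
    return max(items, key=lambda it:
               fisher_info_2pl(it["a"], it["b"], theta) / it["exp_time"])

items = [
    {"id": 1, "a": 1.5, "b": 0.0, "exp_time": 60.0},  # informative but slow
    {"id": 2, "a": 1.0, "b": 0.0, "exp_time": 20.0},  # less informative, fast
]
chosen = select_item(items, theta=0.0)
```

At theta = 0, item 1 carries more raw information (0.5625 vs. 0.25), but item 2 wins per second of testing time (0.0125 vs. 0.0094), which is exactly the trade-off the criterion formalizes.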
ERIC Educational Resources Information Center
Schulze, Pamela A.; Harwood, Robin L.; Schoelmerich, Axel
2001-01-01
Investigated differences in beliefs and practices about infant feeding among middle class Anglo and Puerto Rican mothers. Interviews and observations indicated that Anglo mothers reported earlier attainment of self-feeding and more emphasis on child rearing goals related to self-maximization. Puerto Rican mothers reported later attainment of…
Solar-Energy System for a Commercial Building--Topeka, Kansas
NASA Technical Reports Server (NTRS)
1982-01-01
Report describes a solar-energy system for space heating, cooling and domestic hot water at a 5,600 square-foot (520-square-meter) Topeka, Kansas, commercial building. System is expected to provide 74% of annual cooling load, 47% of heating load, and 95% of domestic hot-water load. System was included in building design to maximize energy conservation.
Magnetic Tape Storage and Handling: A Guide for Libraries and Archives.
ERIC Educational Resources Information Center
Van Bogart, John W. C.
This document provides a guide on how to properly store and care for magnetic media to maximize their life expectancies. An introduction compares magnetic media to paper and film and outlines the scope of the report. The second section discusses things that can go wrong with magnetic media. Binder degradation, magnetic particle instabilities,…
Autonomous entropy-based intelligent experimental design
NASA Astrophysics Data System (ADS)
Malakar, Nabin Kumar
2011-07-01
The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a common goal in an automated fashion.
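The inquiry-engine criterion described above, choosing the experiment whose distribution of expected results has maximum entropy, can be sketched in a few lines; the outcome distributions below are hypothetical:

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def pick_experiment(predictions):
    """Choose the experiment whose predictive distribution over outcomes
    has maximum entropy, i.e., the one whose result we are least able to
    predict and hence expect to learn the most from."""
    return max(predictions, key=lambda name: entropy(predictions[name]))

# Hypothetical predictive distributions over three possible outcomes
predictions = {
    "exp_A": [0.9, 0.05, 0.05],   # nearly certain result: little to learn
    "exp_B": [1/3, 1/3, 1/3],     # maximally uncertain: most informative
    "exp_C": [0.6, 0.3, 0.1],
}
best = pick_experiment(predictions)
```

In a full system these predictive distributions would come from the inference engine's current posterior; here they are fixed by hand to show the selection step in isolation.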
The effect of lifelong exercise dose on cardiovascular function during exercise
Carrick-Ranson, Graeme; Hastings, Jeffrey L.; Bhella, Paul S.; Fujimoto, Naoki; Shibata, Shigeki; Palmer, M. Dean; Boyd, Kara; Livingston, Sheryl; Dijk, Erika
2014-01-01
An increased “dose” of endurance exercise training is associated with a greater maximal oxygen uptake (V̇o2max), a larger left ventricular (LV) mass, and improved heart rate and blood pressure control. However, the effect of lifelong exercise dose on metabolic and hemodynamic response during exercise has not been previously examined. We performed a cross-sectional study on 101 (69 men) seniors (60 yr and older) focusing on lifelong exercise frequency as an index of exercise dose. These included 27 who had performed ≤2 exercise sessions/wk (sedentary), 25 who performed 2–3 sessions/wk (casual), 24 who performed 4–5 sessions/wk (committed) and 25 who performed ≥6 sessions/wk plus regular competitions (Masters athletes) over at least the last 25 yr. Oxygen uptake and hemodynamics [cardiac output, stroke volume (SV)] were collected at rest, two levels of steady-state submaximal exercise, and maximal exercise. Doppler ultrasound measures of LV diastolic filling were assessed at rest and during LV loading (saline infusion) to simulate increased LV filling. Body composition, total blood volume, and heart rate recovery after maximal exercise were also examined. V̇o2max increased in a dose-dependent manner (P < 0.05). At maximal exercise, cardiac output and SV were largest in committed exercisers and Masters athletes (P < 0.05), while arteriovenous oxygen difference was greater in all trained groups (P < 0.05). At maximal exercise, effective arterial elastance, an index of ventricular-arterial coupling, was lower in committed exercisers and Masters athletes (P < 0.05). Doppler measures of LV filling were not enhanced at any condition, irrespective of lifelong exercise frequency. 
These data suggest that performing four or more weekly endurance exercise sessions over a lifetime results in significant gains in V̇o2max, SV, and heart rate regulation during exercise; however, improved SV regulation during exercise is not coupled with favorable effects on LV filling, even when the heart is fully loaded. PMID:24458750
Effects of Strength Training on Postpubertal Adolescent Distance Runners.
Blagrove, Richard C; Howe, Louis P; Cushion, Emily J; Spence, Adam; Howatson, Glyn; Pedlar, Charles R; Hayes, Philip R
2018-06-01
Strength training activities have consistently been shown to improve running economy (RE) and neuromuscular characteristics, such as force-producing ability and maximal speed, in adult distance runners. However, the effects on adolescent (<18 yr) runners remain elusive. This randomized controlled trial aimed to examine the effect of strength training on several important physiological and neuromuscular qualities associated with distance running performance. Participants (n = 25, 13 female, 17.2 ± 1.2 yr) were paired according to their sex and RE and randomly assigned to a 10-wk strength training group (STG) or a control group who continued their regular training. The STG performed twice weekly sessions of plyometric, sprint, and resistance training in addition to their normal running. Outcome measures included body mass, maximal oxygen uptake (V˙O2max), speed at V˙O2max, RE (quantified as energy cost), speed at fixed blood lactate concentrations, 20-m sprint, and maximal voluntary contraction during an isometric quarter-squat. Eighteen participants (STG: n = 9, 16.1 ± 1.1 yr; control group: n = 9, 17.6 ± 1.2 yr) completed the study. The STG displayed small improvements (3.2%-3.7%; effect size (ES), 0.31-0.51) in RE that were inferred as "possibly beneficial" for an average of three submaximal speeds. Trivial or small changes were observed for body composition variables, V˙O2max and speed at V˙O2max; however, the training period provided likely benefits to speed at fixed blood lactate concentrations in both groups. Strength training elicited a very likely benefit and a possible benefit to sprint time (ES, 0.32) and maximal voluntary contraction (ES, 0.86), respectively. Ten weeks of strength training added to the program of a postpubertal distance runner is highly likely to improve maximal speed and to enhance RE by a small extent, without deleterious effects on body composition or other aerobic parameters.
Interval Running Training Improves Cognitive Flexibility and Aerobic Power of Young Healthy Adults.
Venckunas, Tomas; Snieckus, Audrius; Trinkunas, Eugenijus; Baranauskiene, Neringa; Solianik, Rima; Juodsnukis, Antanas; Streckis, Vytautas; Kamandulis, Sigitas
2016-08-01
Venckunas, T, Snieckus, A, Trinkunas, E, Baranauskiene, N, Solianik, R, Juodsnukis, A, Streckis, V, and Kamandulis, S. Interval running training improves cognitive flexibility and aerobic power of young healthy adults. J Strength Cond Res 30(8): 2114-2121, 2016-The benefits of regular physical exercise may well extend beyond the reduction of chronic diseases risk and augmentation of working capacity, to many other aspects of human well-being, including improved cognitive functioning. Although the effects of moderate intensity continuous training on cognitive performance are relatively well studied, the benefits of interval training have not been investigated in this respect so far. The aim of the current study was to assess whether 7 weeks of interval running training is effective at improving both aerobic fitness and cognitive performance. For this purpose, 8 young dinghy sailors (6 boys and 2 girls) completed the interval running program; 200-m and 2,000-m running performance, cycling maximal oxygen uptake, and cognitive function were measured before and after the intervention. The control group consisted of healthy age-matched subjects (8 boys and 2 girls) who continued their active lifestyle and were tested in the same way as the experimental group, but did not complete any regular training. In the experimental group, 200-m and 2,000-m running performance and cycling maximal oxygen uptake increased together with improved results on cognitive flexibility tasks. No changes in the results of short-term and working memory tasks were observed in the experimental group, and no changes in any of the measured indices were evident in the controls. In conclusion, 7 weeks of interval running training improved running performance and cycling aerobic power, and were sufficient to improve the ability to adjust behavior to changing demands in young active individuals.
Aminiaghdam, Soran; Rode, Christian; Müller, Roy; Blickhan, Reinhard
2017-02-01
Pronograde trunk orientation in small birds causes prominent intra-limb asymmetries in the leg function. As yet, it is not clear whether these asymmetries induced by the trunk reflect general constraints on the leg function regardless of the specific leg architecture or size of the species. To address this, we instructed 12 human volunteers to walk at a self-selected velocity with four postures: regular erect, or with 30 deg, 50 deg and maximal trunk flexion. In addition, we simulated the axial leg force (along the line connecting hip and centre of pressure) using two simple models: spring and damper in series, and parallel spring and damper. As trunk flexion increases, lower limb joints become more flexed during stance. Similar to birds, the associated posterior shift of the hip relative to the centre of mass leads to a shorter leg at toe-off than at touchdown, and to a flatter angle of attack and a steeper leg angle at toe-off. Furthermore, walking with maximal trunk flexion induces right-skewed vertical and horizontal ground reaction force profiles comparable to those in birds. Interestingly, the spring and damper in series model provides a superior prediction of the axial leg force across trunk-flexed gaits compared with the parallel spring and damper model; in regular erect gait, the damper does not substantially improve the reproduction of the human axial leg force. In conclusion, mimicking the pronograde locomotion of birds by bending the trunk forward in humans causes a leg function similar to that of birds despite the different morphology of the segmented legs. © 2017. Published by The Company of Biologists Ltd.
Haugen, Thomas; Tønnessen, Espen; Øksenholt, Øyvind; Haugen, Fredrik Lie; Paulsen, Gøran; Enoksen, Eystein; Seiler, Stephen
2015-01-01
The aims of the present study were to compare the effects of 1) training at 90 and 100% sprint velocity and 2) supervised versus unsupervised sprint training on soccer-specific physical performance in junior soccer players. Young, male soccer players (17 ±1 yr, 71 ±10 kg, 180 ±6 cm) were randomly assigned to four different treatment conditions over a 7-week intervention period. A control group (CON, n=9) completed regular soccer training according to their teams’ original training plans. Three training groups performed a weekly repeated-sprint training session in addition to their regular soccer training sessions performed at A) 100% intensity without supervision (100UNSUP, n=13), B) 90% of maximal sprint velocity with supervision (90SUP, n=10) or C) 90% of maximal sprint velocity without supervision (90UNSUP, n=13). Repetitions x distance for the sprint-training sessions were 15x20 m for 100UNSUP and 30x20 m for 90SUP and 90UNSUP. Single-sprint performance (best time from 15x20 m sprints), repeated-sprint performance (mean time over 15x20 m sprints), countermovement jump and Yo-Yo Intermittent Recovery Level 1 (Yo-Yo IR1) were assessed during pre-training and post-training tests. No significant differences in performance outcomes were observed across groups. 90SUP improved Yo-Yo IR1 by a moderate margin compared to controls, while all other effect magnitudes were trivial or small. In conclusion, neither weekly sprint training at 90 or 100% velocity, nor supervised sprint training enhanced soccer-specific physical performance in junior soccer players. PMID:25798601
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding task for data analysts because of the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns when applied to huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed with respect to scalability and performance. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets; the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models using a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because it requires fewer levels of database projection than existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, namely segments, sequences and individual symbols, and exploits a partition-and-conquer method to find maximal patterns using symbolic notations. With this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption, makes use of efficient symbolic operations, and records time-series instances dynamically in terms of character, series and section approaches, respectively. Proving the efficiency of the reduction and retrieval techniques on synthetic and real datasets remains an open and challenging mining problem. These techniques are useful in applications such as data streams, traffic risk analysis, medical diagnosis, DNA sequence mining and earthquake prediction. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in both efficiency and scalability.
Rice, Treva K; Sarzynski, Mark A; Sung, Yun Ju; Argyropoulos, George; Stütz, Adrian M; Teran-Garcia, Margarita; Rao, D C; Bouchard, Claude; Rankinen, Tuomo
2012-08-01
Although regular exercise improves submaximal aerobic capacity, there is large variability in its response to exercise training. While this variation is thought to be partly due to genetic differences, relatively little is known about the causal genes. Submaximal aerobic capacity traits in the current report include the responses of oxygen consumption (ΔVO(2)60), power output (ΔWORK60), and cardiac output (ΔQ60) at 60% of VO2max to a standardized 20-week endurance exercise training program. Genome-wide linkage analysis in 475 HERITAGE Family Study Caucasians identified a locus on chromosome 13q for ΔVO(2)60 (LOD = 3.11). Follow-up fine mapping involved a dense marker panel of over 1,800 single-nucleotide polymorphisms (SNPs) in a 7.9-Mb region (21.1-29.1 Mb from p-terminus). Single-SNP analyses found 14 SNPs moderately associated with both ΔVO(2)60 at P ≤ 0.005 and the correlated traits of ΔWORK60 and ΔQ60 at P < 0.05. Haplotype analyses provided several strong signals (P < 1.0 × 10(-5)) for ΔVO(2)60. Overall, association analyses narrowed the target region and included potential biological candidate genes (MIPEP and SGCG). Consistent with maximal heritability estimates of 23%, up to 20% of the phenotypic variance in ΔVO(2)60 was accounted for by these SNPs. These results implicate candidate genes on chromosome 13q12 for the ability to improve submaximal exercise capacity in response to regular exercise. Submaximal exercise at 60% of maximal capacity is an exercise intensity that falls well within the range recommended in the Physical Activity Guidelines for Americans and thus has potential public health relevance.
Rice, Treva K.; Sarzynski, Mark A.; Sung, Yun Ju; Argyropoulos, George; Stütz, Adrian M.; Teran-Garcia, Margarita; Rao, D. C.; Bouchard, Claude
2014-01-01
Although regular exercise improves submaximal aerobic capacity, there is large variability in its response to exercise training. While this variation is thought to be partly due to genetic differences, relatively little is known about the causal genes. Submaximal aerobic capacity traits in the current report include the responses of oxygen consumption (ΔVO260), power output (ΔWORK60), and cardiac output (ΔQ60) at 60% of VO2max to a standardized 20-week endurance exercise training program. Genome-wide linkage analysis in 475 HERITAGE Family Study Caucasians identified a locus on chromosome 13q for ΔVO260 (LOD = 3.11). Follow-up fine mapping involved a dense marker panel of over 1,800 single-nucleotide polymorphisms (SNPs) in a 7.9-Mb region (21.1–29.1 Mb from p-terminus). Single-SNP analyses found 14 SNPs moderately associated with both ΔVO260 at P ≤ 0.005 and the correlated traits of ΔWORK60 and ΔQ60 at P < 0.05. Haplotype analyses provided several strong signals (P<1.0 × 10−5) for ΔVO260. Overall, association analyses narrowed the target region and included potential biological candidate genes (MIPEP and SGCG). Consistent with maximal heritability estimates of 23%, up to 20% of the phenotypic variance in ΔVO260 was accounted for by these SNPs. These results implicate candidate genes on chromosome 13q12 for the ability to improve submaximal exercise capacity in response to regular exercise. Submaximal exercise at 60% of maximal capacity is an exercise intensity that falls well within the range recommended in the Physical Activity Guidelines for Americans and thus has potential public health relevance. PMID:22170014
Correlation-based regularization and gradient operators for (joint) inversion on unstructured meshes
NASA Astrophysics Data System (ADS)
Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan
2017-04-01
When working with unstructured meshes for geophysical inversions, special attention should be paid to the design of the operators that are used for regularizing the inverse problem and coupling different property models in joint inversions. Regularization constraints for inversions on unstructured meshes are often defined in a rather ad hoc manner and usually only involve the cell to which the operator is applied and its direct neighbours. Similarly, most structural coupling operators for joint inversion, such as the popular cross-gradients operator, are only defined in the direct neighbourhood of a cell. As a result, the regularization and coupling length scales and strength of these operators depend on the discretization as well as cell sizes and shape. Especially for unstructured meshes, where the cell sizes vary throughout the model domain, the dependency of the operator on the discretization may lead to artefacts. Designing operators that are based on a spatial correlation model makes it possible to define correlation length scales over which an operator acts (called the footprint), reducing the dependency on the discretization and the effects of variable cell sizes. Moreover, correlation-based operators can accommodate expected anisotropy by using different length scales in horizontal and vertical directions. Correlation-based regularization operators, also known as stochastic regularization operators, have already been successfully applied to inversions on regular grids. Here, we formulate stochastic operators for unstructured meshes and apply them in 2D surface and 3D cross-well electrical resistivity tomography data inversion examples of layered media. Especially for the synthetic cross-well example, improved inversion results are achieved when stochastic regularization is used instead of a classical smoothness constraint.
For the case of cross-gradients operators for joint inversion, the correlation model is used to define the footprint of the operator and weigh the contributions of the property values that are used to calculate the cross-gradients. In a first series of synthetic-data tests, we examined the mesh dependency of the cross-gradients operators. Compared to operators that are only defined in the direct neighbourhood of a cell, the dependency on the cell size of the cross-gradients calculation is markedly reduced when using operators with larger footprints. A second test with synthetic models focussed on the effect of small-scale variabilities of the parameter value on the cross-gradients calculation. Small-scale variabilities that are superimposed on a global trend of the property value can potentially degrade the cross-gradients calculation and destabilize joint inversion. We observe that the cross-gradients from operators with footprints larger than the length scale of the variabilities are less affected compared to operators with a small footprint. In joint inversions on unstructured meshes, we thus expect the correlation-based coupling operators to ensure robust coupling on a physically meaningful scale.
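The footprint idea described above can be sketched as a correlation-based weighting of neighbouring cells: weights decay with physical distance over a chosen correlation length, so the operator's reach is set by a length scale rather than by local cell size. A simplified 2-D sketch (Gaussian correlation model assumed; the paper's operators are more elaborate):

```python
import math

def correlation_weights(centers, i, length):
    """Correlation-based smoothing weights for cell i of an unstructured
    mesh: neighbours within the footprint (truncated at 3 correlation
    lengths) get Gaussian weights by distance to the cell centre, so the
    operator depends on physical distance, not on the discretization."""
    xi, yi = centers[i]
    w = {}
    for j, (xj, yj) in enumerate(centers):
        if j == i:
            continue
        d = math.hypot(xj - xi, yj - yi)
        if d <= 3 * length:                       # inside the footprint
            w[j] = math.exp(-(d / length) ** 2)   # Gaussian correlation
    s = sum(w.values())
    return {j: v / s for j, v in w.items()}       # normalized weights

# Irregular cell centres: the two nearby cells share the weight equally,
# the distant cell falls outside the footprint entirely
centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
w = correlation_weights(centers, 0, length=1.5)
```

Different horizontal and vertical correlation lengths (anisotropy) could be added by scaling the coordinate differences separately before taking the distance.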
Effect of muscle mass and intensity of isometric contraction on heart rate.
Gálvez, J M; Alonso, J P; Sangrador, L A; Navarro, G
2000-02-01
The purpose of this study was to determine the effect of muscle mass and the level of force on the contraction-induced rise in heart rate. We conducted an experimental study in a sample of 28 healthy men between 20 and 30 yr of age (power: 95%, alpha: 5%). Smokers, obese subjects, and those who performed regular physical activity over a certain amount of energetic expenditure were excluded from the study. The participants exerted two types of isometric contractions: handgrip and turning a 40-cm-diameter wheel. Both were sustained to exhaustion at 20 and 50% of maximal force. Twenty-five subjects finished the experiment. Heart rate increased a mean of 15.1 beats/min [95% confidence interval (CI): 5.5-24.6] from 20 to 50% handgrip contractions, and 20.7 beats/min (95% CI: 11.9-29.5) from 20 to 50% wheel-turn contractions. Heart rate also increased a mean of 13.3 beats/min (95% CI: 10.4-16.1) from handgrip to wheel-turn contractions at 20% maximal force, and 18.9 beats/min (95% CI: 9.8-28.0) from handgrip to wheel-turn contractions at 50% maximal force. We conclude that the magnitude of the heart rate increase during isometric exercise is related to the intensity of the contraction and the mass of the contracted muscle.
Váczi, Márk; Tollár, József; Meszler, Balázs; Juhász, Ivett; Karsai, István
2013-01-01
The aim of the present study was to investigate the effects of a short-term in-season plyometric training program on power, agility and knee extensor strength. Male soccer players from a third league team were assigned into an experimental and a control group. The experimental group, in addition to its regular soccer training sessions, performed a periodized plyometric training program for six weeks. The program included two training sessions per week, and maximal intensity unilateral and bilateral plyometric exercises (total of 40-100 foot contacts/session) were executed. Controls participated only in the same soccer training routine, and did not perform plyometrics. Depth vertical jump height, agility (Illinois Agility Test, T Agility Test) and maximal voluntary isometric torque in knee extensors using a Multicont II dynamometer were evaluated before and after the experiment. In the experimental group small but significant improvements were found in both agility tests, while depth jump height and isometric torque increments were greater. The control group did not improve in any of the measures. Results of the study indicate that plyometric training consisting of high impact unilateral and bilateral exercises induced remarkable improvements in lower extremity power and maximal knee extensor strength, and smaller improvements in soccer-specific agility. Therefore, it is concluded that short-term plyometric training should be incorporated in the in-season preparation of lower level players to improve specific performance in soccer. PMID:23717351
Predictors of cardiovascular fitness in sedentary men.
Riou, Marie-Eve; Pigeon, Etienne; St-Onge, Josée; Tremblay, Angelo; Marette, André; Weisnagel, S John; Joanisse, Denis R
2009-04-01
The relative contribution of anthropometric and skeletal muscle characteristics to cardiorespiratory fitness was studied in sedentary men. Cardiorespiratory fitness (maximal oxygen consumption) was assessed using an incremental bicycle ergometer protocol in 37 men aged 34-53 years. Vastus lateralis muscle biopsy samples were used to assess fiber type composition (I, IIA, IIX) and areas, capillary density, and activities of glycolytic and oxidative energy metabolic pathway enzymes. Correlations (all p < 0.05) were observed between maximal oxygen consumption (L.min-1) and body mass (r = 0.53), body mass index (r = 0.39), waist circumference (r = 0.34), fat free mass (FFM; r = 0.68), fat mass (r = 0.33), the enzyme activity of cytochrome c oxidase (COX; r = 0.39), muscle type IIA (r = 0.40) and IIX (r = 0.50) fiber area, and the number of capillaries per type IIA (r = 0.39) and IIX (r = 0.37) fiber. When adjusted for FFM in partial correlations, all correlations were lost, with the exception of COX (r = 0.48). Stepwise multiple regression revealed that maximal oxygen consumption was independently predicted by FFM, COX activity, mean capillary number per fiber, waist circumference, and, to a lesser extent, muscle capillary supply. In the absence of regular physical activity, cardiorespiratory fitness is strongly predicted by the potential for aerobic metabolism of skeletal muscle and negatively correlated with abdominal fat deposition.
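The adjustment described above (pairwise correlations re-examined with FFM held constant) is the standard first-order partial correlation. A sketch with invented correlation values, not the study's data:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# e.g., a raw correlation of 0.6 shrinks once a shared covariate
# (correlated 0.5 with each variable) is partialled out
r_adj = partial_corr(0.6, 0.5, 0.5)
```

This is how a correlation such as COX vs. maximal oxygen consumption can survive adjustment while others vanish: only the covariance not shared with the control variable remains.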
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.
The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.
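The two-term objective (maximize power capture, penalize surge load via a weight) can be illustrated with a toy single-frequency damper model rather than the paper's pseudo-spectral formulation; the resonant velocity model, the quadratic load penalty, and all numbers below are assumptions for illustration:

```python
def tradeoff_optimum(penalty, b_rad=1.0, f_exc=1.0, grid=None):
    """Grid-search PTO damping maximizing power minus a weighted load penalty.

    Toy model at resonance: velocity amplitude V = F / (b_rad + b_pto),
    mean power P = 0.5 * b_pto * V**2, load proxy = b_pto * V.
    """
    grid = grid or [i / 100 for i in range(10, 301)]
    def score(b_pto):
        v = f_exc / (b_rad + b_pto)
        power = 0.5 * b_pto * v * v
        load = b_pto * v
        return power - penalty * load * load
    return max(grid, key=score)

b_no_penalty = tradeoff_optimum(0.0)  # classic impedance match: b_pto = b_rad
b_penalized = tradeoff_optimum(0.5)   # load shedding shifts the optimum lower
```

With zero penalty the grid search recovers the textbook impedance-matching optimum; a positive penalty trades some absorbed power for a larger reduction in foundation load, mirroring the paper's conclusion.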
Performance determinants of fixed gear cycling during criteriums.
Babault, Nicolas; Poisson, Maxime; Cimadoro, Guiseppe; Cometti, Carole; Païzis, Christos
2018-06-17
Nowadays, fixed gear competitions on outdoor circuits such as criteriums are regularly organized worldwide. To date, no study has investigated this alternative form of cycling. The purpose of the present study was to examine fixed gear performance indexes and to characterize physiological determinants of fixed gear cyclists. This study was carried out in two parts. Part 1 (n = 36) examined correlations between performance indexes obtained during a real fixed gear criterium (time trial, fastest laps, averaged lap time during races, fatigue indexes) and during a sprint track time trial. Part 2 (n = 9) examined correlations between the recorded performance indexes and some aerobic and anaerobic performance outputs (VO2max, maximal aerobic power, knee extensor and knee flexor maximal voluntary torque, vertical jump height and performance during a modified Wingate test). Results from Part 1 indicated significant correlations between fixed gear final performance (i.e. average lap time during the finals) and single lap time (time trial, fastest lap during races and sprint track time trial). In addition, results from Part 2 revealed significant correlations between fixed gear performance and aerobic indicators (VO2max and maximal aerobic power). However, no significant relationship was obtained between fixed gear cycling and anaerobic qualities such as strength. Similarly to traditional cycling disciplines, we concluded that fixed gear cycling is mainly limited by aerobic capacity, particularly final performance in criteriums. However, specific skills including technical competency should be considered.
Neuromuscular response differences to power vs strength back squat exercise in elite athletes.
Brandon, R; Howatson, G; Strachan, F; Hunter, A M
2015-10-01
The study's aim was to establish the neuromuscular responses in elite athletes during and following maximal 'explosive' regular back squat exercise at heavy, moderate, and light loads. Ten elite track and field athletes completed 10 sets of five maximal squat repetitions on three separate days. Knee extension maximal isometric voluntary contraction (MIVC), rate of force development (RFD) and evoked peak twitch force (Pt) assessments were made pre- and post-session. Surface electromyography [root mean square (RMS)] and mechanical measurements were recorded during repetitions. The heavy session resulted in the greatest repetition impulse in comparison to moderate and light sessions (P < 0.001), while the latter showed highest repetition power (P < 0.001). MIVC, RFD, and Pt were significantly reduced post-session (P < 0.01), with greatest reduction observed after the heavy, followed by the moderate and light sessions accordingly. Power significantly reduced during the heavy session only (P < 0.001), and greater increases in RMS occurred during heavy session (P < 0.001), followed by moderate, with no change during light session. In conclusion, this study has shown in elite athletes that the moderate load is optimal for providing a neuromuscular stimulus but with limited fatigue. This type of intervention could be potentially used in the development of both strength and power in elite athletic populations. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am
2014-04-01
[Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment were subjected to an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation / heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake and maximal values of ventilation / heart rate. Significant changes were observed in the value of ventilation volumes at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal values of ventilation / heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo as a security martial arts training increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.
Gamma loop contributing to maximal voluntary contractions in man.
Hagbarth, K E; Kunesch, E J; Nordin, M; Schmidt, R; Wallin, E U
1986-01-01
A local anaesthetic drug was injected around the peroneal nerve in healthy subjects in order to investigate whether the resulting loss in foot dorsiflexion power in part depended on a gamma-fibre block preventing 'internal' activation of spindle end-organs and thereby depriving the alpha-motoneurones of an excitatory spindle inflow during contraction. The motor outcome of maximal dorsiflexion efforts was assessed by measuring firing rates of individual motor units in the anterior tibial (t.a.) muscle, mean voltage e.m.g. from the pretibial muscles, dorsiflexion force and range of voluntary foot dorsiflexion movements. The tests were performed with and without peripheral conditioning stimuli, such as agonist or antagonist muscle vibration or imposed stretch of the contracting muscles. As compared to control values of t.a. motor unit firing rates in maximal isometric voluntary contractions, the firing rates were lower and more irregular during maximal dorsiflexion efforts performed during subtotal peroneal nerve blocks. During the development of paresis a gradual reduction of motor unit firing rates was observed before the units ceased responding to the voluntary commands. This change in motor unit behaviour was accompanied by a reduction of the mean voltage e.m.g. activity in the pretibial muscles. At a given stage of anaesthesia the e.m.g. responses to maximal voluntary efforts were more affected than the responses evoked by electric nerve stimuli delivered proximal to the block, indicating that impaired impulse transmission in alpha motor fibres was not the sole cause of the paresis. The inability to generate high and regular motor unit firing rates during peroneal nerve blocks was accentuated by vibration applied over the antagonistic calf muscles. By contrast, in eight out of ten experiments agonist stretch or vibration caused an enhancement of motor unit firing during the maximal force tasks. 
The reverse effects of agonist and antagonist vibration on the ability to activate the paretic muscles were evidenced also by alterations induced in mean voltage e.m.g. activity, dorsiflexion force and range of dorsiflexion movements. The autogenetic excitatory and the reciprocal inhibitory effects of muscle vibration rose in strength as the vibration frequency was raised from 90 to 165 Hz. Reflex effects on maximal voluntary contraction strength similar to those observed during partial nerve blocks were not seen under normal conditions when the nerve supply was intact.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:3612576
Video lottery: winning expectancies and arousal.
Ladouceur, Robert; Sévigny, Serge; Blaszczynski, Alexander; O'Connor, Kieron; Lavoie, Marc E
2003-06-01
This study investigates the effects of video lottery players' expectancies of winning on physiological and subjective arousal. Participants were assigned randomly to one of two experimental conditions: high and low winning expectancies. Participants played 100 video lottery games in a laboratory setting while physiological measures were recorded. Level of risk-taking was controlled. Participants were 34 occasional or regular video lottery players. They were assigned randomly into two groups of 17, with nine men and eight women in each group. The low-expectancy group played for fun, therefore expecting to win worthless credits, while the high-expectancy group played for real money. Players' experience, demographic variables and subjective arousal were assessed. Severity of problem gambling was measured with the South Oaks Gambling Screen. In order to measure arousal, the average heart rate was recorded across eight periods. Participants exposed to high as compared to low expectations experienced faster heart rate prior to and during the gambling session. According to self-reports, it is the expectancy of winning money that is exciting, not playing the game. Regardless of the level of risk-taking, expectancy of winning is a cognitive factor influencing levels of arousal. When playing for fun, gambling becomes significantly less stimulating than when playing for money.
Aguirre, Claudia G.; Bello, Mariel S.; Pang, Raina D.; Andrabi, Nafeesa; Hendricks, Peter S.; Bluthenthal, Ricky N.; Leventhal, Adam M.
2016-01-01
The current study utilized the intersectionality framework to explore whether smoking outcome expectancies (i.e., cognitions about the anticipated effects of smoking) were predicted by gender and ethnicity, and the gender-by-ethnicity interaction. In a cross-sectional design, daily smokers from the general community [32.2% women; Non-Hispanic African American (N=175), Non-Hispanic White (N=109), or Hispanic (N=26)] completed self-report measures on smoking expectancies and other co-factors. Results showed that women reported greater negative reinforcement (i.e., anticipated smoking-induced negative affect reduction) and weight control (i.e., anticipated smoking-induced appetite/weight suppression) expectancies than men. Hispanic (vs. African American or White) smokers endorsed greater negative reinforcement expectancies. A gender by ethnicity interaction was found for weight control expectancies, such that White women reported greater weight control expectancies than White men, but no gender differences among African American and Hispanic smokers were found. Ethnicity, gender, and their intersectionality should be considered in smoking cessation programs to target smoking-related cognitions. PMID:26438665
Aguirre, Claudia G; Bello, Mariel S; Andrabi, Nafeesa; Pang, Raina D; Hendricks, Peter S; Bluthenthal, Ricky N; Leventhal, Adam M
2016-01-01
The current study utilized the intersectionality framework to explore whether smoking outcome expectancies (i.e., cognitions about the anticipated effects of smoking) were predicted by gender and ethnicity, and the gender-by-ethnicity interaction. In a cross-sectional design, daily smokers from the general community (32.2% women; non-Hispanic African American [n = 175], non-Hispanic White [n = 109], or Hispanic [n = 26]) completed self-report measures on smoking expectancies and other co-factors. Results showed that women reported greater negative reinforcement (i.e., anticipated smoking-induced negative affect reduction) and weight control (i.e., anticipated smoking-induced appetite/weight suppression) expectancies than men. Hispanic (vs. African American or White) smokers endorsed greater negative reinforcement expectancies. A gender-by-ethnicity interaction was found for weight control expectancies, such that White women reported greater weight control expectancies than White men, but no gender differences among African American and Hispanic smokers were found. These findings suggest that gender, ethnicity, and their intersectionality should be considered in research on cognitive mechanisms that may contribute to tobacco-related health disparities. © The Author(s) 2015.
The cerebellum predicts the temporal consequences of observed motor acts.
Avanzino, Laura; Bove, Marco; Pelosin, Elisa; Ogliastro, Carla; Lagravinese, Giovanna; Martino, Davide
2015-01-01
It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularity of events is referred to as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, comprising the cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation, when timing information is processed to make predictions on the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation covering the right lateral cerebellum prior to the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing the movements were shown in full; the actual tasks consisted of watching the same videos, but interrupted after a variable interval from onset by a dark interval of variable duration. During the 'dark' interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking on the spacebar of the keyboard. Performance on the timing task was analyzed by measuring the absolute value of the timing error, the coefficient of variability and the percentage of anticipation responses. The active group exhibited greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.
The value of foresight: how prospection affects decision-making.
Pezzulo, Giovanni; Rigoli, Francesco
2011-01-01
Traditional theories of decision-making assume that utilities are based on the intrinsic value of outcomes; in turn, these values depend on associations between expected outcomes and the current motivational state of the decision-maker. This view disregards the fact that humans (and possibly other animals) have prospection abilities, which permit anticipating future mental processes and motivational and emotional states. For instance, we can evaluate future outcomes in light of the motivational state we expect to have when the outcome is collected, not (only) when we make a decision. Consequently, we can plan for the future and choose to store food to be consumed when we expect to be hungry, not immediately. Furthermore, similarly to any expected outcome, we can assign a value to our anticipated mental processes and emotions. It has been reported that (in some circumstances) human subjects prefer to receive an unavoidable punishment immediately, probably because they are anticipating the dread associated with the time spent waiting for the punishment. This article offers a formal framework to guide neuroeconomic research on how prospection affects decision-making. The model has two characteristics. First, it uses model-based Bayesian inference to describe anticipation of cognitive and motivational processes. Second, the utility-maximization process considers these anticipations in two ways: to evaluate outcomes (e.g., the pleasure of eating a pie is evaluated differently at the beginning of a dinner, when one is hungry, and at the end of the dinner, when one is satiated), and as outcomes having a value themselves (e.g., the case of dread as a cost of waiting for punishment). By explicitly accounting for the relationship between prospection and value, our model provides a framework to reconcile the utility-maximization approach with psychological phenomena such as planning for the future and dread.
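The core idea above, evaluating an outcome under the motivational state anticipated at the time the outcome is collected rather than the current one, can be sketched as a small expected-utility computation. The states, probabilities, and utility values are invented for illustration, not part of the authors' formal Bayesian model:

```python
# Utility of an outcome depends on the motivational state when it is collected
utility = {("eat", "hungry"): 10.0, ("eat", "satiated"): 2.0}

def prospective_value(action, state_probs):
    """Expected utility under the *anticipated* distribution over future states."""
    return sum(p * utility[(action, s)] for s, p in state_probs.items())

value_now = prospective_value("eat", {"satiated": 1.0})  # currently satiated
value_later = prospective_value("eat", {"hungry": 0.9, "satiated": 0.1})
store_food = value_later > value_now  # plan for the future: store the food
```

The same mechanism accommodates dread by assigning a (negative) value to the anticipated waiting period itself, so that an immediate punishment can dominate a delayed one.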
ERIC Educational Resources Information Center
Stephenson, Jennifer; O'Neill, Sue; Carter, Mark
2012-01-01
With increasing expectations that preservice teachers will be prepared to teach students with special needs in regular classrooms, it is timely to review relevant units in teacher education courses. Units relevant to special education/inclusion in primary undergraduate teacher preparation courses in Australian tertiary institutions, delivered in…
ERIC Educational Resources Information Center
Angus, Rebecca; Hughes, Thomas
2017-01-01
Schools regularly implement numerous programs to satisfy widespread expectations. Often, implementation is carried out with little follow-up examining data that could help refine or determine the ultimate worth of the intervention. Through utilization of both descriptive and empirical methods, this study delved into the long-term effectiveness of…
Learning Documentations in VET Systems: An Analysis of Current Swiss Practices
ERIC Educational Resources Information Center
Caruso, Valentina; Cattaneo, Alberto; Gurtner, Jean-Luc
2016-01-01
Swiss vocational education and training (VET) is defined as a dual-track system where apprentices weekly alternate between vocational school and a (real) workplace. At the workplace, they have to keep a learning documentation throughout their training, in which they are expected to regularly document their professional development. The actual use…
The Intransitivity of Educational Preferences
ERIC Educational Resources Information Center
Smith, Debra Candace
2013-01-01
This study sought to answer the question of whether the existence of cycles in education are random events, or if cycles in education are likely to be expected on a regular basis due to intransitive decision-making patterns of stakeholders. This was a quantitative study, modeled after two previously conducted studies (Davis, 1958/59; May, 1954),…
Pros in Parks: Integrated Programming for Reaching Our Urban Park Operations Audience
ERIC Educational Resources Information Center
Miller, Laura M.; Walker, Jamie Rae
2016-01-01
In addition to regular job duties, such as tree care, mulching, irrigation, and pesticide management, urban park workers have faced environmental changes due to drought, wildfires, and West Nile virus. They simultaneously have endured expectations to manage growing, diversifying park usage and limitations on career development. An integrated…
Grading A-Level Double Subject Mathematicians and the Implications for Selection.
ERIC Educational Resources Information Center
Newbould, Charles A.
1981-01-01
Test data were used to compare the grading of two forms of double mathematics: pure and applied math, and regular and advanced math. Results confirm expectations that in the former system, the grading is comparable, and in the latter, it is not necessarily comparable. Implications for student admission are discussed. (MSE)
The "Something Other": Personal Competencies for Learning and Life
ERIC Educational Resources Information Center
Redding, Sam
2014-01-01
Parents seek for their children "something other" than what they usually expect them to acquire through the regular school program, and they turn to extracurricular activities and out-of-school experiences to find it. Teachers know that each student brings to a learning task a "something other"--certain attributes that affect…
ERIC Educational Resources Information Center
Moore Johnson, Susan; Reinhorn, Stefanie K.; Simon, Nicole S.
2016-01-01
Teachers in high-poverty schools often feel stressed and fatigued. We might expect that if we ask these teachers to take on even more work by meeting regularly in collaborative improvement teams, they will respond with skepticism, even resentment. But in a study of 83 teachers in six outstanding high-poverty schools, these researchers found the…
Endocrinopathies in thalassemia major patient
NASA Astrophysics Data System (ADS)
Lubis, D. A.; Yunir, E. M.
2018-03-01
Advances in chelation therapy and regular blood transfusion have led to marked improvements in the life expectancy of patients with thalassemia major; however, these patients still have to deal with several complications. We report a 19-year-old male who presented with multiple endocrine complications related to thalassemia major: hypogonadism, short stature, osteoporosis with a history of fracture, and subclinical hypothyroidism.
In the Summer, Getting Into College Is Easy.
ERIC Educational Resources Information Center
Gose, Ben
1998-01-01
Many colleges have summer academic programs for high school students; some design special courses, while others admit the students to regular college courses. Some target gifted students. While participation in the programs, which are easy to get into, does not assure later entry to the institution, many parents have that expectation. The programs…
ERIC Educational Resources Information Center
Keep, Ewart; Rogers, David; Hunt, Sally; Walden, Christopher; Fryer, Bob; Gorard, Stephen; Williams, Ceri; Jones, Wendy; Hartley, Ralph
2010-01-01
With 6 billion British pounds of public spending reductions already on the table, and far deeper cuts inevitable, what are the prospects for adult learning in the new Parliament? Some of the regular contributors of this journal were asked what they expected and what they would like to see. Ewart Keep warns that the coalition parties' commitments…
An analysis of competitive bidding by providers for indigent medical care contracts.
Kirkman-Liff, B L; Christianson, J B; Hillman, D G
1985-01-01
This article develops a model of behavior in bidding for indigent medical care contracts in which bidders set bid prices to maximize their expected utility, conditional on estimates of variables which affect the payoff associated with winning or losing a contract. The hypotheses generated by this model are tested empirically using data from the first round of bidding in the Arizona indigent health care experiment. The behavior of bidding organizations in Arizona is found to be consistent in most respects with the predictions of the model. Bid prices appear to have been influenced by estimated costs and by expectations concerning the potential loss from not securing a contract, the initial wealth of the bidding organization, and the expected number of competitors in the bidding process. PMID:4086301
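The bid-setting logic described above, choosing a price to maximize expected utility given the win probability, the profit from winning, and the loss from not securing a contract, can be sketched for a risk-neutral bidder. The linear win-probability curve and all numbers are assumptions for illustration, not estimates from the Arizona data:

```python
def optimal_bid(cost, loss_if_lose, bid_ceiling=200, step=1):
    """Risk-neutral bid maximizing the expected payoff.

    Win probability falls linearly from 1 at bid 0 to 0 at the ceiling,
    a stand-in for expectations about the number of competitors.
    """
    def expected_payoff(bid):
        p_win = max(0.0, 1.0 - bid / bid_ceiling)
        return p_win * (bid - cost) - (1.0 - p_win) * loss_if_lose
    return max(range(0, bid_ceiling + 1, step), key=expected_payoff)

b0 = optimal_bid(cost=50, loss_if_lose=0)   # no penalty for losing
b1 = optimal_bid(cost=50, loss_if_lose=50)  # a costlier loss lowers the bid
```

Consistent with the model's prediction, a larger potential loss from not securing a contract pushes the optimal bid price down.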
Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, Soumya; Hansen, Jacob; Lian, Jianming
2018-04-19
The potential of electrical loads to provide grid ancillary services is often limited by the uncertainties associated with load behavior. Knowledge of the uncertainties expected in a load control program would yield better-informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency-responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response under uncertainties in the load dynamics. A closed-form expression for an optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.
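The idea of committing the flexibility that minimizes the expected error between target and actual aggregate response can be illustrated with a Monte Carlo sketch over an ensemble of on/off loads. The Bernoulli response model and every parameter below are assumptions for illustration, not the paper's water-heater dynamics:

```python
import random

def simulate_ensemble(n_loads=100, p_respond=0.8, kw_per_load=1.0,
                      n_trials=5000, seed=0):
    """Sample the aggregate response (kW) of independently responding loads."""
    rng = random.Random(seed)
    return [kw_per_load * sum(rng.random() < p_respond for _ in range(n_loads))
            for _ in range(n_trials)]

def best_commitment(samples, candidates):
    """Commitment minimizing the Monte Carlo mean squared tracking error."""
    mse = lambda c: sum((a - c) ** 2 for a in samples) / len(samples)
    return min(candidates, key=mse)

samples = simulate_ensemble()
best = best_commitment(samples, range(60, 101))  # optimum near 100 * 0.8 = 80 kW
```

Because the squared error is minimized at the mean of the response distribution, the grid search lands near the ensemble's expected response, which is the flavor of closed-form result the paper derives analytically.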
Ohlsson, A; Steinhaus, D; Kjellström, B; Ryden, L; Bennett, T
2003-06-01
Exercise testing is commonly used in patients with congestive heart failure for diagnostic and prognostic purposes. Such testing may be even more valuable if invasive hemodynamics are acquired. However, this makes the test more complex and expensive and only provides information from isolated moments. We studied serial exercise tests in heart failure patients with implanted hemodynamic monitors allowing recording of central hemodynamics. Twenty-one NYHA Class II-III heart failure patients underwent maximal exercise tests and submaximal bike or 6-min hall walk tests to quantify their hemodynamic responses and to study the feasibility of conducting exercise tests in patients with such devices. Patients were followed for 2-3 years with serial exercise tests. During maximal tests (n=70), heart rate increased by 52+/-19 bpm while SvO2 decreased by 35+/-10% saturation units. RV systolic and diastolic pressure increased 29+/-11 and 11+/-6 mmHg, respectively, while pulmonary artery diastolic pressure increased 21+/-8 mmHg. Submaximal bike (n=196) and hall walk tests (n=172) resulted in SvO2 changes of 80 and 91% of the maximal tests, while RV pressures ranged from 72 to 79% of maximal responses. An added potential value of implantable hemodynamic monitors in heart failure patients may be to quantitatively determine the true hemodynamic profile during standard non-invasive clinical exercise tests and to compare that to the hemodynamic effects of regular exercise during daily living. It would be of interest to study whether such information could improve the ability to predict changes in a patient's clinical condition and to improve the tailoring of patient management.
Siveke, Ida; Leibold, Christian; Grothe, Benedikt
2007-11-01
We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to "favorable" ITDs and weakly or not at all in response to "unfavorable" ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Using concurrent noise with a spectral composition in which the neuron's excitatory frequencies are omitted reduced the maximal response similar to that obtained with concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITD might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.
Niemelä, Kristiina; Väänänen, Ilkka; Leinonen, Raija; Laukkanen, Pia
2011-08-01
Home-based exercise is a viable alternative for older adults with difficulties in exercise opportunities outside the home. The aim of this study was to investigate the benefits of home-based rocking-chair training and its effects on the physical performance of elderly women. Community-dwelling women (n=51) aged 73-87 years were randomly assigned to the rocking-chair group (RCG, n=26) or control group (CG, n=25) by drawing lots. Baseline and outcome measurements were hand grip strength, maximal isometric knee extension, maximal walking speed over 10 meters, rising from a chair five times, and the Berg Balance Scale (BBS). The RCG carried out a six-week rocking-chair training program at home, involving ten sessions per week, twice a day for 15 minutes per session, and ten different movements. The CG continued their usual daily lives. After three months, the RCG responded to a mail questionnaire. After the intervention, the RCG improved and the CG declined. The data showed significant interactions of group by time in the BBS score (p=0.001), maximal knee extension strength (p=0.006) and maximal walking speed (p=0.046), which indicates that the change between groups during the follow-up period was significant. Adherence to the training protocol was high (96%). After three months, the exercise program had become a regular home exercise habit for 88.5% of the subjects. Results indicate that elderly women benefit from this easily implemented home-based rocking-chair exercise program. The subjects became motivated to participate in training and continued the exercises. This is a promising alternative exercise method for maintaining physical activity and leads to improvements in physical performance.
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
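The Dice metric used for the synthetic-data validation is simple to compute: it measures the overlap of two binary masks. Below is a minimal illustrative sketch in Python/NumPy (not the authors' code; the toy masks are made up):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy 2x3 segmentation masks (1 = tissue class of interest)
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, truth)   # 2*2 / (3+3) = 0.666...
```

For a multi-class segmentation such as the one described here, the score is typically computed per tissue class and then averaged.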
Agent-Based Model Approach to Complex Phenomena in Real Economy
NASA Astrophysics Data System (ADS)
Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.
An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. Dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit with possible risks in the market. Infinite growth of a firm directed by the "profit maximization" principle is suppressed by the concept of a "going concern". The possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decisions. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.
Miller, Stephen; Pike, James; Stacy, Alan W; Xie, Bin; Ames, Susan L
2017-06-01
Despite the general trend of declining use of traditional cigarettes among young adults in the United States, alternative high school students continue to smoke cigarettes and electronic cigarettes at rates much higher than do students attending regular high schools. Challenging life circumstances leading to elevated levels of negative affect may account for increased smoking behavior in this population. Further, a belief in the negative affect-reducing qualities of nicotine may mediate this effect. The current study tested the hypothesis that negative reinforcing outcome expectancies mediate the relationship between negative affect and smoking susceptibility in nonusers, smoking frequency in users, and smoking experimentation in the overall sample. Results support the hypothesis that negative affect in alternative high school students is correlated with smoking experimentation, smoking willingness, and smoking frequency and that the relationship between negative affect and smoking behavior outcomes is mediated by negative reinforcing outcome expectancies (i.e., beliefs in the negative affect-reducing effects of smoking). This finding was supported for both cigarettes and electronic cigarettes and coincides with a rapid increase in the number of high school students nationally who have experimented with electronic cigarettes. Future antismoking initiatives directed at at-risk youth should consider integrating healthier negative affect reduction techniques to counter the use of nicotine products. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Neuron-Type-Specific Utility in a Brain-Machine Interface: a Pilot Study.
Garcia-Garcia, Martha G; Bergquist, Austin J; Vargas-Perez, Hector; Nagai, Mary K; Zariffa, Jose; Marquez-Chin, Cesar; Popovic, Milos R
2017-11-01
Firing rates of single cortical neurons can be volitionally modulated through biofeedback (i.e. operant conditioning), and this information can be transformed to control external devices (i.e. brain-machine interfaces; BMIs). However, not all neurons respond to operant conditioning in BMI implementation. Establishing criteria that predict neuron utility will assist translation of BMI research to clinical applications. Single cortical neurons (n=7) were recorded extracellularly from primary motor cortex of a Long-Evans rat. Recordings were incorporated into a BMI involving up-regulation of firing rate to control the brightness of a light-emitting-diode and subsequent reward. Neurons were classified as 'fast-spiking', 'bursting' or 'regular-spiking' according to waveform-width and intrinsic firing patterns. Fast-spiking and bursting neurons were found to up-regulate firing rate by a factor of 2.43±1.16, demonstrating high utility, while regular-spiking neurons decreased firing rates on average by a factor of 0.73±0.23, demonstrating low utility. The ability to select neurons with high utility will be important to minimize training times and maximize information yield in future clinical BMI applications. The highly contrasting utility observed between fast-spiking and bursting neurons versus regular-spiking neurons allows for the hypothesis to be advanced that intrinsic electrophysiological properties may be useful criteria that predict neuron utility in BMI implementation.
Chen, Zhongxian; Yu, Haitao; Wen, Cheng
2014-01-01
The goal of a direct-drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. Although most ocean wave energy extraction systems are optimized through their structure, weight, and material, this paper proposes an optimal control method based on internal model proportion integration differentiation (IM-PID). With this control method, the heave speed of the outer buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability. PMID:25152913
AdS3 to dS3 transition in the near horizon of asymptotically de Sitter solutions
NASA Astrophysics Data System (ADS)
Sadeghian, S.; Vahidinia, M. H.
2017-08-01
We consider two solutions of Einstein-Λ theory which admit the extremal vanishing horizon (EVH) limit, odd-dimensional multispinning Kerr black hole (in the presence of cosmological constant) and cosmological soliton. We show that the near horizon EVH geometry of Kerr has a three-dimensional maximally symmetric subspace whose curvature depends on rotational parameters and the cosmological constant. In the Kerr-dS case, this subspace interpolates between AdS3 , three-dimensional flat and dS3 by varying rotational parameters, while the near horizon of the EVH cosmological soliton always has a dS3 . The feature of the EVH cosmological soliton is that it is regular everywhere on the horizon. In the near EVH case, these three-dimensional parts turn into the corresponding locally maximally symmetric spacetimes with a horizon: Kerr-dS3 , flat space cosmology or BTZ black hole. We show that their thermodynamics match with the thermodynamics of the original near EVH black holes. We also briefly discuss the holographic two-dimensional CFT dual to the near horizon of EVH solutions.
The effect of a novel square-profile hand rim on propulsion technique of wheelchair tennis players.
de Groot, Sonja; Bos, Femke; Koopman, Jorine; Hoekstra, Aldo E; Vegter, Riemer J K
2018-09-01
The purpose of this study was to investigate the effect of a square-profile hand rim (SPR) on the propulsion technique of wheelchair tennis players. Eight experienced wheelchair tennis players performed two sets of three submaximal exercise tests and six sprint tests on a wheelchair ergometer, once with a regular rim (RR) and once with an SPR. Torque and velocity were measured continuously, and power output and timing variables were calculated. No significant differences were found in propulsion technique between the RR and SPR during the submaximal tests. When sprinting with the racket, the SPR showed a significantly lower overall speed (9.1 vs. 9.8 m/s), maximal speed (10.5 vs. 11.4 m/s), and maximal acceleration (18.6 vs. 10.9 m/s²). The SPR does not seem to improve the propulsion technique when propelling a wheelchair with a tennis racket in the hand. However, the results gave input for new hand rim designs for wheelchair tennis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Network marketing on a small-world network
NASA Astrophysics Data System (ADS)
Kim, Beom Jun; Jun, Tackseung; Kim, Jeong-Yoo; Choi, M. Y.
2006-02-01
We investigate a dynamic model of network marketing in a small-world network structure artificially constructed similarly to the Watts-Strogatz network model. Unlike in traditional marketing, in network marketing consumers can also play the role of the manufacturer's selling agents, stimulated by the referral fee the manufacturer offers. As the wiring probability α is increased from zero to unity, the network changes from a one-dimensional regular directed network to the star network in which all but one player are connected to one consumer. The price p of the product and the referral fee r are used as free parameters to maximize the profit of the manufacturer. It is observed that at α=0 the maximized profit is constant, independent of the network size N, while at α≠0 it increases linearly with N. This is in parallel to the small-world transition. It is also revealed that while the optimal value of p stays at an almost constant level in a broad range of α, that of r is sensitive to a change in the network structure. The consumer surplus is also studied and discussed.
2014-01-01
An amylase- and lipase-producing bacterium (strain C2) was enriched and isolated from soil regularly contaminated with olive washing wastewater in Sfax, Tunisia. The cells were aerobic, mesophilic, Gram-negative, motile and non-sporulating, capable of growing optimally at pH 7 and 30°C and tolerating maximally 10% (w/v) NaCl. The predominant fatty acids were found to be C18:1ω7c (32.8%), C16:1ω7c (27.3%) and C16:0 (23.1%). Phylogenetic analysis of the 16S rRNA gene revealed that this strain belongs to the genus Pseudomonas. Strain C2 was found to be closely related to Pseudomonas luteola with more than 99% similarity. Optimization of amylase extraction was carried out using a Box-Behnken Design (BBD). Maximal activity was found when the pH and temperature ranged from 5.5 to 6.5 and from 33 to 37°C, respectively. Under these conditions, amylase activity was found to be about 9.48 U/ml. PMID:24405763
Takebayashi, T; Varsier, N; Kikuchi, Y; Wake, K; Taki, M; Watanabe, S; Akiba, S; Yamaguchi, N
2008-02-12
In a case-control study in Japan of brain tumours in relation to mobile phone use, we used a novel approach for estimating the specific absorption rate (SAR) inside the tumour, taking account of spatial relationships between tumour localisation and intracranial radiofrequency distribution. Personal interviews were carried out with 88 patients with glioma, 132 with meningioma, and 102 with pituitary adenoma (322 cases in total), and with 683 individually matched controls. All maximal SAR values were below 0.1 W kg(-1), far lower than the level at which thermal effects may occur; the adjusted odds ratios (ORs) for regular mobile phone users were 1.22 (95% confidence interval (CI): 0.63-2.37) for glioma and 0.70 (0.42-1.16) for meningioma. When the maximal SAR value inside the tumour tissue was accounted for in the exposure indices, the overall OR was again not increased and there was no significant trend towards an increasing OR in relation to SAR-derived exposure indices. A non-significant increase in OR among glioma patients in the heavily exposed group may reflect recall bias.
How the medical practice employee can get more from continuing education programs.
Hills, Laura Sachs
2007-01-01
Continuing education can be a win-win situation for the medical practice employee and for the practice. However, in order to benefit from continuing education programs, employees must become informed consumers of such programs. They must know how to select the right educational programs for their needs and maximize their own participation. Employees who attend continuing education programs without preparation may not get the full benefit from their experiences. This article suggests benchmarks to help determine whether a continuing education program is worthwhile and offers advice for calculating the actual cost of any continuing education program. It provides a how-to checklist for medical practice employees so they know how to get the most out of their continuing education experience before, during, and after the program. This article also suggests using a study partner system to double educational efforts among employees and offers 10 practical tips for taking and using notes at a continuing education program. Finally, this article outlines the benefits of becoming a regular student and offers three practical tips for maximizing the employee's exhibit hall experience.
Collard, Marie; De Ridder, Chantal; David, Bruno; Dehairs, Frank; Dubois, Philippe
2015-02-01
Increasing atmospheric carbon dioxide concentration alters the chemistry of the oceans towards more acidic conditions. Polar oceans are particularly affected due to their low temperature, low carbonate content and mixing patterns, for instance upwellings. Calcifying organisms are expected to be highly impacted by the decrease in the oceans' pH and carbonate ions concentration. In particular, sea urchins, members of the phylum Echinodermata, are hypothesized to be at risk due to their high-magnesium calcite skeleton. However, tolerance to ocean acidification in metazoans is first linked to acid-base regulation capacities of the extracellular fluids. No information on this is available to date for Antarctic echinoderms and inference from temperate and tropical studies needs support. In this study, we investigated the acid-base status of 9 species of sea urchins (3 cidaroids, 2 regular euechinoids and 4 irregular echinoids). It appears that Antarctic regular euechinoids seem equipped with similar acid-base regulation systems as tropical and temperate regular euechinoids but could rely on more passive ion transfer systems, minimizing energy requirements. Cidaroids have an acid-base status similar to that of tropical cidaroids. Therefore Antarctic cidaroids will most probably not be affected by decreasing seawater pH, the pH drop linked to ocean acidification being negligible in comparison of the naturally low pH of the coelomic fluid. Irregular echinoids might not suffer from reduced seawater pH if acidosis of the coelomic fluid pH does not occur but more data on their acid-base regulation are needed. Combining these results with the resilience of Antarctic sea urchin larvae strongly suggests that these organisms might not be the expected victims of ocean acidification. 
However, data on the impact of other global stressors such as temperature and of the combination of the different stressors needs to be acquired to assess the sensitivity of these organisms to global change. © 2014 John Wiley & Sons Ltd.
A guide to the visual analysis and communication of biomolecular structural data.
Johnson, Graham T; Hertig, Samuel
2014-10-01
Biologists regularly face an increasingly difficult task - to effectively communicate bigger and more complex structural data using an ever-expanding suite of visualization tools. Whether presenting results to peers or educating an outreach audience, a scientist can achieve maximal impact with minimal production time by systematically identifying an audience's needs, planning solutions from a variety of visual communication techniques and then applying the most appropriate software tools. A guide to available resources that range from software tools to professional illustrators can help researchers to generate better figures and presentations tailored to any audience's needs, and enable artistically inclined scientists to create captivating outreach imagery.
How can organisations influence their older employees' decision of when to retire?
Oakman, Jodi; Howie, Linsey
2013-01-01
This article reports on a study of older employees of a large public service organisation and examines their experiences of employment and their intentions to retire. This study collected qualitative data through focus group interviews with 42 participants. Key themes derived from data analysis with regard to influences on retirement intentions included: personal, organizational and legislative influences. The study concludes that organisations can retain their older workers longer if they provide sufficient support, the work offered is satisfying, and part-time work is available. Regular review of employees' performance and satisfaction is required to maximize the productivity and retention of older workers.
Vasudevan, Abhinav; Gibson, Peter R; van Langenberg, Daniel R
2017-01-01
An awareness of the expected time for therapies to induce symptomatic improvement and remission is necessary for determining the timing of follow-up, disease (re)assessment, and the duration to persist with therapies, yet this is seldom reported as an outcome in clinical trials. In this review, we explore the time to clinical response and remission of current therapies for inflammatory bowel disease (IBD), as well as medication, patient and disease related factors that may influence the time to clinical response. It appears that the time to therapeutic response varies depending on the indication for therapy (Crohn's disease or ulcerative colitis). Agents with the most rapid time to clinical response included corticosteroids, calcineurin inhibitors, exclusive enteral nutrition, aminosalicylates and anti-tumor necrosis factor therapy, which will work in most patients within the first 2 mo. Vedolizumab, methotrexate and thiopurines had a longer time to clinical response and can take several months to achieve maximal efficacy. Factors affecting the time to clinical response of therapies included use of concomitant therapy, disease duration, smoking status, disease phenotype and advanced age. There appears to be marked variation in time to clinical response for therapies used in IBD, which is further influenced by disease and patient related factors. Understanding the expected time to therapeutic response is integral to inform further decision making, maintain a patient-centered approach and ensure treatment is given an appropriate timeframe to achieve maximal benefit prior to cessation. PMID:29085188
Gwak, Jae Ha; Lee, Bo Kyeong; Lee, Won Kyung; Sohn, So Young
2017-03-15
This study proposes a new framework for the selection of optimal locations for green roofs to achieve a sustainable urban ecosystem. The proposed framework selects building sites that can maximize the benefits of green roofs, based not only on the socio-economic and environmental benefits to urban residents, but also on the provision of urban foraging sites for honeybees. The framework comprises three steps. First, building candidates for green roofs are selected considering the building type. Second, the selected building candidates are ranked in terms of their expected socio-economic and environmental effects. The benefits of green roofs are improved energy efficiency and air quality, reduction of urban flood risk and infrastructure improvement costs, reuse of storm water, and creation of space for education and leisure. Furthermore, the estimated cost of installing green roofs is also considered. We employ spatial data to determine the expected effects of green roofs on each building unit, because the benefits and costs may vary depending on the location of the building. This is due to the heterogeneous spatial conditions. In the third step, the final building sites are proposed by solving the maximal covering location problem (MCLP) to determine the optimal locations for green roofs as urban honeybee foraging sites. As an illustrative example, we apply the proposed framework in Seoul, Korea. This new framework is expected to contribute to sustainable urban ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
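The maximal covering location problem (MCLP) in the third step picks p sites so that the total weight of covered demand points is maximized; it is usually solved exactly as an integer program. The sketch below only illustrates the problem structure with a greedy heuristic and made-up site and demand names, not the authors' Seoul data or formulation:

```python
def greedy_mclp(demand, cover_sets, p):
    """Greedy heuristic for the MCLP: at each step open the candidate site
    that covers the largest weight of not-yet-covered demand."""
    chosen, covered = [], set()
    for _ in range(p):
        best_site, best_gain = None, -1
        for site, covers in cover_sets.items():
            if site in chosen:
                continue
            gain = sum(demand[j] for j in covers - covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        chosen.append(best_site)
        covered |= cover_sets[best_site]
    return chosen, sum(demand[j] for j in covered)

# toy instance: five demand points with weights, three candidate roofs
demand = {0: 5, 1: 3, 2: 4, 3: 2, 4: 6}
cover_sets = {"A": {0, 1}, "B": {1, 2, 3}, "C": {4}}
sites, covered_weight = greedy_mclp(demand, cover_sets, p=2)  # picks B, then C
```

Because the covered weight is a submodular objective, this greedy rule carries a (1 - 1/e) approximation guarantee; an exact solution would instead encode the site and coverage variables in an integer linear program.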
Maximizing Federal IT Dollars: A Connection Between IT Investments and Organizational Performance
2011-04-01
Portfolio theory for investments balances diversification of financial assets (stocks, bonds, and cash) against expected returns and risk (Markowitz, 1952). In the public sector, the analogue of shareholder return is stakeholder satisfaction (a stakeholder may not pay proportionally for service): the stakeholders are taxpayers and legislative bodies rather than the stockholders, owners, and market of the private sector.
Expectation Maximization and its Application in Modeling, Segmentation and Anomaly Detection
2008-05-01
Expectation maximization addresses incomplete-data problems; the incompleteness of the data may be due to missing observations, censored distributions, etc. One such case is a…
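To make the incomplete-data idea concrete: in a Gaussian mixture, the unobserved component labels are the missing data, and EM alternates between computing their posterior probabilities (E-step) and re-estimating the parameters from those soft assignments (M-step). A minimal illustrative sketch for a two-component 1-D mixture (not code from the report above):

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture, treating the unknown
    component labels as the missing data."""
    mu = np.quantile(x, [0.25, 0.75])      # crude but well-separated init
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        pdf = (np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
               / (sigma * np.sqrt(2 * np.pi)))
        resp = pi * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])
pi, mu, sigma = em_gmm_1d(x)   # mu should recover roughly [-2, 3]
```

Each iteration is guaranteed not to decrease the data log-likelihood, which is what makes the scheme attractive for the missing- and censored-data settings described above.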
ERIC Educational Resources Information Center
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, in combination with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
Optimal control of orientation and entanglement for two dipole-dipole coupled quantum planar rotors.
Yu, Hongling; Ho, Tak-San; Rabitz, Herschel
2018-05-09
Optimal control simulations are performed for orientation and entanglement of two dipole-dipole coupled identical quantum rotors. The rotors at various fixed separations lie on a model non-interacting plane with an applied control field. It is shown that optimal control of orientation or entanglement represents two contrasting control scenarios. In particular, the maximally oriented state (MOS) of the two rotors has a zero entanglement entropy and is readily attainable at all rotor separations. In contrast, the maximally entangled state (MES) has a zero orientation expectation value and is most conveniently attainable at small separations where the dipole-dipole coupling is strong. It is demonstrated that the peak orientation expectation value attained by the MOS at large separations exhibits a long-time revival pattern due to the small energy splittings arising from the extremely weak dipole-dipole coupling between the degenerate product states of the two free rotors. Moreover, it is found that the peak entanglement entropy value attained by the MES remains largely unchanged as the two rotors are transported to large separations after turning off the control field. Finally, optimal control simulations of transition dynamics between the MOS and the MES reveal the intricate interplay between orientation and entanglement.
Time perspective and well-being: Swedish survey questionnaires and data.
Garcia, Danilo; Nima, Ali Al; Lindskär, Erik
2016-12-01
The data pertain to 448 Swedes' responses to questionnaires on time perspective (Zimbardo Time Perspective Inventory), temporal life satisfaction (Temporal Satisfaction with Life Scale), affect (Positive Affect and Negative Affect Schedule), and psychological well-being (Ryff's Scales of Psychological Well-Being, short version). The data were collected among university students and individuals at a training facility (see U. Sailer, P. Rosenberg, A.A. Nima, A. Gamble, T. Gärling, T. Archer, D. Garcia, 2014; [1]). Since there were no differences in any of the other background variables except exercise frequency, all subsequent analyses were conducted on the 448 participants as one single sample. In this article we include the Swedish versions of the questionnaires used to operationalize the time perspective and well-being variables. The data are available, as an SPSS file, in the Supplementary material of this article. We used the Expectation-Maximization Algorithm to impute missing values. Little's Chi-Square test for Missing Completely at Random showed χ² = 67.25 (df=53, p=.09) for men and χ² = 77.65 (df=72, p=.31) for women. These values suggested that the Expectation-Maximization Algorithm was suitable for missing data imputation on this data set.
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks, and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a comparable centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
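As a concrete reference point, the centralized EM iteration that such a distributed algorithm is proven to match can be sketched for a one-dimensional Gaussian mixture. The peer-to-peer message-passing machinery itself is not reproduced here; the function below is our own illustrative baseline, and the M-step's weighted sums are exactly the sufficient statistics a P2P network would have to aggregate.

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=100):
    """Centralized EM for a 1-D Gaussian mixture: the baseline a
    provably-correct distributed EM should converge to."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] proportional to pi_j * N(x_n | mu_j, var_j)
        d = x[:, None] - mu[None, :]
        logp = -0.5 * d ** 2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)   # stabilize before exponentiating
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from weighted sufficient statistics
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / len(x)
    return mu, var, pi
```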
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz
We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having a nearly flat central region.
Updated atomic weights: Time to review our table
Coplen, Tyler B.; Meyers, Fabienne; Holden, Norman E.
2016-01-01
Despite common belief, atomic weights are not necessarily constants of nature. Scientists' ability to measure these values is regularly improving, so one would expect the accuracy of these values to improve with time. It is the task of the IUPAC (International Union of Pure and Applied Chemistry) Commission on Isotopic Abundances and Atomic Weights (CIAAW) to regularly review atomic-weight determinations and release updated values. According to an evaluation published in Pure and Applied Chemistry [1], even the most simplified table, abridged to four significant digits, needs to be updated for the elements selenium and molybdenum. According to the most recent 2015 release of "Atomic Weights of the Elements" [2], another update is needed for ytterbium.
Calculation of Expectation Values of Operators in the Complex Scaling Method
Papadimitriou, G.
2016-06-14
The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L²-integrable basis to resolve the complex-rotated (complex-scaled) Hamiltonian H_θ, with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. One consequence is that expectation values of operators in a resonance or scattering complex-scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to calculations of expectation values of quantum mechanical operators by using the regularized back-rotation technique, hence calculating the expectation value with the unrotated operator. The test cases involve a schematic two-body Gaussian model as well as applications using realistic interactions.
Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn
2010-06-01
This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in a uniform-price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem, considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can provide the efficient non-dominated solution front effectively. In addition, it can be used as a decision-making tool for a GenCo trading off expected profit against price risk in the spot market.
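The non-dominated sorting at the core of NSPSO-type optimizers can be illustrated with a tiny first-front filter over objective tuples such as (expected profit, negative risk), where every coordinate is to be maximized. This is a generic sketch of the sorting step only, not the paper's algorithm:

```python
def pareto_front(points):
    """Return the first (non-dominated) front of a list of objective tuples,
    assuming every coordinate is to be maximized, e.g. (expected profit,
    -risk) pairs.  q dominates p if q is >= p in every coordinate and
    strictly better in at least one.  Illustrative sketch only."""
    def dominates(q, p):
        # all-coordinates >= plus tuple inequality implies strictly better somewhere
        return all(qi >= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a full NSPSO run this filter would be applied repeatedly (peeling off successive fronts) to rank the swarm's candidate bidding strategies.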
Tootelian, Dennis H; Mikhailitchenko, Andrey; Holst, Cindy; Gaedeke, Ralph M
2016-01-01
The health care landscape has changed dramatically. Consumers now seek plans whose benefits better fit their health care needs and desires for access to providers. This exploratory survey of more than 1,000 HMO and non-HMO customers found significant differences with respect to their selection processes for health plans and providers, and their expectations regarding access to and communication with health care providers. While there are some similarities in factors affecting choice, segmentation strategies are necessary to maximize the appeal of a plan, satisfy customers in the selection of physicians, and meet their expectations regarding access to those physicians.
Noisy Preferences in Risky Choice: A Cautionary Note
2017-01-01
We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual’s preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. PMID:28569526
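The abstract's central point, that noise in preference parameters can make a pure expected-value maximizer look risk-averse, is easy to demonstrate in simulation. In the hypothetical sketch below, the decision maker's core preference is a power utility with exponent alpha = 1 (i.e., EV maximization), but alpha is perturbed trial by trial; all parameter values are our own illustrative choices.

```python
import numpy as np

def noisy_ev_choices(sure, p=0.5, win=100.0, alpha_sd=0.3, n=10000, seed=0):
    """Proportion of trials on which a noisy EV maximizer chooses the gamble
    (win `win` with probability p, else 0) over a sure amount `sure`.
    The power-utility exponent alpha is 1 on average (EV maximization) but
    fluctuates from trial to trial."""
    rng = np.random.default_rng(seed)
    alpha = 1.0 + alpha_sd * rng.standard_normal(n)  # noisy preference parameter
    u_gamble = p * win ** alpha
    u_sure = sure ** alpha
    return (u_gamble > u_sure).mean()
```

A deterministic EV maximizer would take the gamble on 100% of trials when the sure amount (e.g. 40) is below the gamble's EV of 50; the noisy agent takes it less often, a pattern that a deterministic-model fit would misread as risk aversion.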
Collective states in social systems with interacting learning agents
NASA Astrophysics Data System (ADS)
Semeshenko, Viktoriya; Gordon, Mirta B.; Nadal, Jean-Pierre
2008-08-01
We study the implications of social interactions and individual learning features on consumer demand in a simple market model. We consider a social system of interacting heterogeneous agents with learning abilities. Given a fixed price, agents repeatedly decide whether or not to buy a unit of a good, so as to maximize their expected utilities. This model is close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. We show that the equilibrium reached depends on the nature of the information agents use to estimate their expected utilities. It may be different from the system's Nash equilibria.
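The model class described above can be sketched as a fixed-point iteration on the fraction of buyers: agent i buys iff its private willingness to pay plus a social term proportional to the current buying fraction exceeds the price. This is our own illustrative reduction of the model family, with hypothetical parameter values.

```python
import numpy as np

def demand_equilibrium(price, J, iwp, eta0=0.0, n_iter=200):
    """Fixed-point iteration for the fraction of buyers eta in a simple
    interacting-agents demand model: agent i buys iff
        iwp_i + J * eta - price > 0,
    where iwp_i is the idiosyncratic willingness to pay (the 'random field')
    and J is the social coupling strength."""
    eta = eta0
    for _ in range(n_iter):
        eta = np.mean(iwp + J * eta - price > 0)  # update buying fraction
    return eta
```

With a steeper density of willingness to pay and strong coupling J, the same map can have multiple fixed points, i.e. coexisting low- and high-demand equilibria, which is the Random-Field-Ising flavour the abstract alludes to.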
Lin, Shin-Yi; Chien, Shih-Chang; Wang, Sheng-Yang; Mau, Jeng-Leun
2016-01-01
Pleurotus citrinopileatus mycelium was prepared with high ergothioneine (Hi-Ergo) content and its proximate composition, nonvolatile taste components, and antioxidant properties were studied. The ergothioneine contents of fruiting bodies and Hi-Ergo and regular mycelia were 3.89, 14.57, and 0.37 mg/g dry weight, respectively. Hi-Ergo mycelium contained more dietary fiber, soluble polysaccharides, and ash but less carbohydrates, reducing sugar, fiber, and fat than regular mycelium. However, Hi-Ergo mycelium contained the smallest amounts of total sugars and polyols (47.43 mg/g dry weight). In addition, Hi-Ergo mycelium showed the most intense umami taste. On the basis of the half-maximal effective concentration values obtained, the 70% ethanolic extract from Hi-Ergo mycelium showed the most effective antioxidant activity, reducing power, and scavenging ability, whereas the fruiting body showed the most effective antioxidant activity, chelating ability, and Trolox-equivalent antioxidant capacity. Overall, Hi-Ergo mycelium could be beneficially used as a food-flavoring material or as a nutritional supplement.
Field experience and performance evaluation of a medium-concentration CPV system
NASA Astrophysics Data System (ADS)
Norton, Matthew; Bentley, Roger; Georghiou, George E.; Chonavel, Sylvain; De Mutiis, Alfredo
2012-10-01
With the aim of gaining experience and performance data from a location with a harsh summer climate, a 70× concentrating photovoltaic (CPV) system was installed in January 2009 in Nicosia, Cyprus. The performance of this system has been monitored using regular current-voltage characterisations for three years. Over this period, the output of the system has remained fairly constant. Measured performance ratios varied from 0.79 to 0.86 in the winter, but fell to 0.64 over the year when the system was left uncleaned. Operating cell temperatures were modeled and found to be similar to those of flat-plate modules. The most significant causes of energy loss were identified as tracking issues and soiling. Losses due to soiling could account for a drop in output of 0.2% per day. When cleaned and properly oriented, the normalized output of the system remained constant, suggesting that this particular design is tolerant to the physical strain of long-term outdoor exposure in harsh summer conditions. Regular cleaning and reliable tracker operation are shown to be essential for maximizing energy yield.

Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
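The L2-regularized portfolio described above has a simple closed form when variance is used as the risk measure (the paper itself uses expected shortfall, so this is a simplified sketch): minimizing w'Σw + λ‖w‖² subject to the budget constraint sum(w) = 1 gives w proportional to (Σ + λI)⁻¹1.

```python
import numpy as np

def min_var_weights(Sigma, lam=0.0):
    """Minimum-variance portfolio with an L2 penalty on the weights:
        minimize  w' Sigma w + lam * ||w||^2   subject to  sum(w) = 1.
    Closed form from the Lagrangian: w is proportional to (Sigma + lam*I)^{-1} 1.
    Sketch only -- the paper's risk measure is expected shortfall, not variance."""
    n = Sigma.shape[0]
    ones = np.ones(n)
    w = np.linalg.solve(Sigma + lam * np.eye(n), ones)
    return w / w.sum()
```

As λ grows, the weights are pulled toward equal weighting (1/n per asset), which is the "diversification pressure" the abstract describes: the penalty trades a little in-sample optimality for stability of the solution under sample fluctuations.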
Distributed and opposing effects of incidental learning in the human brain.
Hall, Michelle G; Naughtin, Claire K; Mattingley, Jason B; Dux, Paul E
2018-06-01
Incidental learning affords a behavioural advantage when sensory information matches regularities that have previously been encountered. Previous studies have taken a focused approach by probing the involvement of specific candidate brain regions underlying incidentally acquired memory representations, as well as expectation effects on early sensory representations. Here, we investigated the broader extent of the brain's sensitivity to violations and fulfilments of expectations, using an incidental learning paradigm in which the contingencies between target locations and target identities were manipulated without participants' overt knowledge. Multivariate analysis of functional magnetic resonance imaging data was applied to compare the consistency of neural activity for visual events that the contingency manipulation rendered likely versus unlikely. We observed widespread sensitivity to expectations across frontal, temporal, occipital, and sub-cortical areas. These activation clusters showed distinct response profiles, such that some regions displayed more reliable activation patterns under fulfilled expectations, whereas others showed more reliable patterns when expectations were violated. These findings reveal that expectations affect multiple stages of information processing during visual decision making, rather than early sensory processing stages alone. Copyright © 2018 Elsevier Inc. All rights reserved.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. 
The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. 
We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
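The state-dependent character of such harvest strategies can be illustrated with a toy dynamic-programming model. The sketch below is deterministic and uses entirely hypothetical numbers (logistic growth, a desired population size, a harvest reward minus a deviation penalty); the paper's actual problem is stochastic, spans nine goose models, and includes weather covariates.

```python
import numpy as np

def harvest_policy(K=100, r=0.3, goal=60, beta=0.95, n_iter=200):
    """Value-iteration sketch of a state-dependent harvest strategy:
    integer population states 0..K, deterministic logistic growth, and a
    per-step reward equal to the harvest minus a penalty for straying from
    a desired population size `goal`.  All parameters are hypothetical."""
    V = np.zeros(K + 1)
    policy = np.zeros(K + 1, dtype=int)
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for s in range(K + 1):
            best = -np.inf
            for h in range(s + 1):                 # candidate harvest levels
                n = s - h                          # post-harvest population
                nxt = min(K, n + int(round(r * n * (1 - n / K))))  # logistic growth
                val = h - 0.5 * abs(n - goal) + beta * V[nxt]
                if val > best:
                    best, policy[s] = val, h
            V_new[s] = best
        V = V_new
    return policy
```

The resulting policy maps each abundance state to a harvest level, i.e. exactly the kind of state-dependent rule the authors credit for much of the strategies' value.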
Assessment of Students' Satisfaction: A Case Study of Dire Dawa University, Ethiopia
ERIC Educational Resources Information Center
Daniel, Dawit; Liben, Getachew; Adugna, Ashenafi
2017-01-01
Universities in the modern world are expected to seek and cultivate new knowledge, provide the right kind of leadership, and strive to promote equality and social justice. The general objective of the study is to investigate the satisfaction level of undergraduate students enrolled in the regular program of Dire-Dawa University and thereby…
29 CFR 778.220 - “Show-up” or “reporting” pay.
Code of Federal Regulations, 2010 CFR
2010-07-01
... scheduled work on any day will receive a minimum of 4 hours' work or pay. The employee thus receives not... failure to provide expected work during regular hours. One of the primary purposes of such an arrangement... that an employee entitled to overtime pay after 40 hours a week whose workweek begins on Monday and who...