Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
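To make the regularization step concrete, here is a minimal NumPy sketch of a Tikhonov-regularized minimum-norm inverse operator. The matrix sizes, the implicit identity noise and source covariances, and the two example lambda values are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def mne_inverse_operator(leadfield, lam):
    """Tikhonov-regularized minimum-norm inverse kernel.

    leadfield : (n_sensors, n_sources) forward matrix
    lam       : regularization coefficient (lambda)
    Returns the (n_sources, n_sensors) kernel W so that source
    estimates are W @ sensor_data.
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T                      # (n_sensors, n_sensors)
    return leadfield.T @ np.linalg.inv(gram + lam * np.eye(n_sensors))

# Toy usage: a larger lambda for power analysis, a smaller one for coupling,
# mirroring the two-orders-of-magnitude difference reported in the abstract.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))        # hypothetical 32 sensors, 500 sources
data = rng.standard_normal((32, 1000))    # hypothetical sensor time series
sources_power = mne_inverse_operator(L, 1e-2) @ data
sources_coh = mne_inverse_operator(L, 1e-4) @ data
```

Sweeping such a kernel over a grid of lambda values, separately for power and coherence detection, is the kind of search the abstract describes.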
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
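A hedged sketch of the reconstruction-classification alternation described above, with a linear toy forward model standing in for the nonlinear DOT operator; the penalty weight `beta`, the two-class mixture, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def reconstruct_classify(A, y, n_classes=2, beta=1.0, n_iter=10):
    """Alternate a Tikhonov-type reconstruction step with a Gaussian-mixture
    classification step (linear toy model A x = y in place of DOT)."""
    n_pix = A.shape[1]
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # initial unregularized estimate
    labels = np.zeros(n_pix, dtype=int)
    for _ in range(n_iter):
        # Classification step: fit a mixture model to the current pixel values.
        gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x.reshape(-1, 1))
        labels = gmm.predict(x.reshape(-1, 1))
        prior_mean = gmm.means_[labels, 0]             # per-pixel class mean
        # Reconstruction step: quadratic (Tikhonov-like) penalty toward class means.
        x = np.linalg.solve(A.T @ A + beta * np.eye(n_pix),
                            A.T @ y + beta * prior_mean)
    return x, labels
```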
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed as morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the usage of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Convergence of damped inertial dynamics governed by regularized maximally monotone operators
NASA Astrophysics Data System (ADS)
Attouch, Hedy; Cabot, Alexandre
2018-06-01
In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ(t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ(t). A suitable tuning of the parameters γ(t) and λ(t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3], and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ(t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve, the results already present in the literature.
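For concreteness, the dynamic studied in this abstract can be written as below; the notation is reconstructed from the description, with A the maximally monotone operator and A_λ its standard Yosida regularization.

```latex
% Damped inertial dynamic with Yosida-regularized operator (notation assumed
% from the abstract's description, not copied from the paper)
\ddot{x}(t) + \gamma(t)\,\dot{x}(t) + A_{\lambda(t)}\bigl(x(t)\bigr) = 0,
\qquad
A_{\lambda} := \frac{1}{\lambda}\Bigl(I - (I + \lambda A)^{-1}\Bigr).
```

The special case γ(t) = α/t mentioned in the abstract corresponds to the damping schedule associated with Nesterov's accelerated method.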
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the region of the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms
NASA Astrophysics Data System (ADS)
Fackler, Stephan
2017-05-01
An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.
Symbolic Dynamics and Grammatical Complexity
NASA Astrophysics Data System (ADS)
Hao, Bai-Lin; Zheng, Wei-Mou
The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results
Firing patterns of spontaneously active motor units in spinal cord-injured subjects
Zijdewind, Inge; Thomas, Christine K
2012-01-01
Involuntary motor unit activity at low rates is common in hand muscles paralysed by spinal cord injury. Our aim was to describe these patterns of motor unit behaviour in relation to motoneurone and motor unit properties. Intramuscular electromyographic activity (EMG), surface EMG and force were recorded for 30 min from thenar muscles of nine men with chronic cervical SCI. Motor units fired for sustained periods (>10 min) at regular (coefficient of variation ≤ 0.15, CV, n = 19 units) or irregular intervals (CV > 0.15, n = 14). Regularly firing units started and stopped firing independently suggesting that intrinsic motoneurone properties were important for recruitment and derecruitment. Recruitment (3.6 Hz, SD 1.2), maximal (10.2 Hz, SD 2.3, range: 7.5–15.4 Hz) and derecruitment frequencies were low (3.3 Hz, SD 1.6), as were firing rate increases after recruitment (∼20 intervals in 3 s). Once active, firing often covaried, promoting the idea that units received common inputs. Half of the regularly firing units showed a very slow decline (>40 s) in discharge before derecruitment and had interspike intervals longer than their estimated afterhyperpolarisation potential (AHP) duration (estimated by death rate and breakpoint analyses). The other units were derecruited more abruptly and had shorter estimated AHP durations. Overall, regularly firing units had longer estimated AHP durations and were weaker than irregularly firing units, suggesting they were lower threshold units. Sustained firing of units at regular rates may reflect activation of persistent inward currents, visible here in the absence of voluntary drive, whereas irregularly firing units may only respond to synaptic noise. PMID:22310313
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) calculates the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
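A minimal NumPy sketch of the regularized cross-entropy objective the abstract describes, i.e. cross-entropy plus a Gaussian-prior (L2) penalty on the parameters. The function and argument names are hypothetical, and `alpha` stands in for the estimated Gaussian precision hyperparameter.

```python
import numpy as np

def regularized_cross_entropy(logits, targets, params, alpha):
    """MAP-style objective: softmax cross-entropy plus a Gaussian-prior penalty.

    logits  : (n_examples, vocab_size) unnormalized scores
    targets : (n_examples,) integer word indices
    params  : iterable of weight arrays of the (RNN) model
    alpha   : assumed precision of the zero-mean Gaussian prior
    """
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(targets)), targets].mean()
    penalty = 0.5 * alpha * sum((w ** 2).sum() for w in params)
    return ce + penalty
```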
Asadi, Abbas; Ramirez-Campillo, Rodrigo; Meylan, Cesar; Nakamura, Fabio Y; Cañas-Jamett, Rodrigo; Izquierdo, Mikel
2017-12-01
The aim of the present study was to compare maximal-intensity exercise adaptations in young basketball players (who were strong individuals at baseline) participating in regular basketball training versus regular training plus a volume-based plyometric training program in the pre-season period. Young basketball players were recruited and assigned either to a plyometric plus regular basketball training group (experimental group [EG]; N.=8), or a basketball training only group (control group [CG]; N.=8). The athletes in the EG performed periodized (i.e., from 117 to 183 jumps per session) plyometric training for eight weeks. Before and after the intervention, players were assessed in vertical and broad jump, change of direction, maximal strength and a 60-meter sprint test. No significant improvements were found in the CG, while the EG improved vertical jump (effect size [ES] 2.8), broad jump (ES=2.4), agility T test (ES=2.2), Illinois agility test (ES=1.4), maximal strength (ES=1.8), and 60-m sprint (ES=1.6) (P<0.05) after the intervention, and the improvements were greater compared to the CG (P<0.05). Plyometric training in addition to regular basketball practice can lead to meaningful improvements in maximal-intensity exercise adaptations among young basketball players during the pre-season.
Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.
Engemann, Denis A; Gramfort, Alexandre
2015-03-01
Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
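The cross-validated model comparison described here can be sketched with scikit-learn's covariance estimators; the particular estimators listed (empirical, Ledoit-Wolf, OAS shrinkage) and the 5-fold split are illustrative stand-ins for the shrinkage/PPCA/FA family compared in the paper.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf, OAS
from sklearn.model_selection import KFold

def best_covariance_model(X, models=None, n_splits=5):
    """Pick the covariance model with the highest Gaussian log-likelihood on unseen data.

    X : (n_samples, n_channels) array of M/EEG samples (channels as columns).
    """
    if models is None:
        models = {"empirical": EmpiricalCovariance(),
                  "ledoit_wolf": LedoitWolf(),
                  "oas": OAS()}
    scores = {}
    for name, model in models.items():
        fold_scores = []
        for train, test in KFold(n_splits=n_splits).split(X):
            model.fit(X[train])
            fold_scores.append(model.score(X[test]))   # held-out log-likelihood
        scores[name] = float(np.mean(fold_scores))
    return max(scores, key=scores.get), scores
```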
Maximal volume behind horizons without curvature singularity
NASA Astrophysics Data System (ADS)
Wang, Shao-Jun; Guo, Xin-Xuan; Wang, Towe
2018-01-01
The black hole information paradox is related to the area of event horizon, and potentially to the volume and singularity behind it. One example is the complexity/volume duality conjectured by Stanford and Susskind. Accepting the proposal of Christodoulou and Rovelli, we calculate the maximal volume inside regular black holes, which are free of curvature singularity, in asymptotically flat and anti-de Sitter spacetimes respectively. The complexity/volume duality is then applied to anti-de Sitter regular black holes. We also present an analytical expression for the maximal volume outside the de Sitter horizon.
Hermassi, Souhail; van den Tillaar, Roland; Khlifa, Riadh; Chelly, Mohamed Souhaiel; Chamari, Karim
2015-08-01
The purpose of this study was to compare the effect of a specific resistance training program (throwing movement with a medicine ball) with that of regular training (throwing with regular balls) on ball velocity, anthropometry, maximal upper-body strength, and power. Thirty-four elite male team handball players (age: 18 ± 0.5 years, body mass: 80.6 ± 5.5 kg, height: 1.80 ± 5.1 m, body fat: 13.4 ± 0.6%) were randomly assigned to 1 of 3 groups: control (n = 10), resistance training group (n = 12), or regular throwing training group (n = 12). Over the 8-week in-season period, the athletes performed their assigned training program 3 times per week alongside their normal team handball training. One repetition maximum (1RM) bench press and 1RM pullover scores assessed maximal arm strength. Anthropometry was assessed by body mass, fat percentage, and muscle volumes of the upper body. Handball throwing velocity was measured by a standing throw, a throw with run, and a jump throw. Power was measured as the total distance thrown in a 3-kg medicine ball overhead throw. Throwing velocity, maximal strength, power, and muscle volume increased for the specific resistance training group after the 8 weeks of training, whereas the regular throwing training group improved only in maximal strength, muscle volume, power, and the jump throw. No significant changes were found for the control group. The current findings suggest that elite male handball players can improve ball velocity, anthropometrics, maximal upper-body strength, and power during the competition season by implementing a medicine ball throwing program.
Liu, Feng
2018-01-01
In this paper we investigate the endpoint regularity of the discrete m -sublinear fractional maximal operator associated with [Formula: see text]-balls, both in the centered and uncentered versions. We show that these operators map [Formula: see text] into [Formula: see text] boundedly and continuously. Here [Formula: see text] represents the set of functions of bounded variation defined on [Formula: see text].
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
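A compact sketch of one such multiplicative minimum I-divergence update, with Green's one-step-late (OSL) handling of the penalty gradient. The linear kernel matrix `K`, the penalty callback, and the iteration count are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def osl_min_idivergence(K, g, beta=0.0, penalty_grad=None, n_iter=50, eps=1e-12):
    """Richardson-Lucy-style update minimizing Csiszar's I-divergence I(g || K f),
    with an optional one-step-late (OSL) penalty term.

    K : (n_meas, n_temp) non-negative kernel (discretized Planck-law integral)
    g : (n_meas,) measured non-negative power spectrum
    penalty_grad : callable returning the gradient of the roughness penalty at f
    """
    f = np.ones(K.shape[1])                    # non-negative initial estimate
    sens = K.sum(axis=0)                       # K^T 1
    for _ in range(n_iter):
        ratio = g / np.maximum(K @ f, eps)
        denom = sens.copy()
        if penalty_grad is not None:
            denom = denom + beta * penalty_grad(f)   # OSL: evaluate penalty at current f
        f = f * (K.T @ ratio) / np.maximum(denom, eps)
    return f
```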
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized likelihood (PL) reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the non-decimated DCT wavelet frame transform as a regularizer shows promise for SPECT image reconstruction using the PAPA method.
Independence of reaction time and response force control during isometric leg extension.
Fukushi, Tamami; Ohtsuki, Tatsuyuki
2004-04-01
In this study, we examined the relative control of reaction time and force in responses of the lower limb. Fourteen female participants (age 21.2 +/- 1.0 years, height 1.62 +/- 0.05 m, body mass 54.1 +/- 6.1 kg; mean +/- s) were instructed to exert their maximal isometric one-leg extension force as quickly as possible in response to an auditory stimulus presented after one of 13 foreperiod durations, ranging from 0.5 to 10.0 s. In the 'irregular condition' each foreperiod was presented in random order, while in the 'regular condition' each foreperiod was repeated consecutively. A significant interactive effect of foreperiod duration and regularity on reaction time was observed (P < 0.001 in two-way ANOVA with repeated measures). In the irregular condition the shorter foreperiod induced a longer reaction time, while in the regular condition the shorter foreperiod induced a shorter reaction time. Peak amplitude of isometric force was affected only by the regularity of foreperiod and there was a significant variation of changes in peak force across participants; nine participants were shown to significantly increase peak force for the regular condition (P < 0.001), three to decrease it (P < 0.05) and two showed no difference. These results indicate the independence of reaction time and response force control in the lower limb motor system. Variation of changes in peak force across participants may be due to the different attention to the bipolar nature of the task requirements such as maximal force and maximal speed.
Assessment of the Maximal Split-Half Coefficient to Estimate Reliability
ERIC Educational Resources Information Center
Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun
2010-01-01
The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
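A direct, brute-force illustration of the coefficient described above: enumerate every split of the items into halves, apply the Spearman-Brown correction to each split-half correlation, and keep the maximum. The function name is hypothetical and the enumeration is practical only for small item counts.

```python
import numpy as np
from itertools import combinations

def maximal_split_half(items):
    """items: (n_subjects, n_items) score matrix with an even number of items.
    Returns the largest Spearman-Brown corrected split-half reliability over all splits."""
    n_items = items.shape[1]
    best = -np.inf
    for half in combinations(range(n_items), n_items // 2):
        other = [j for j in range(n_items) if j not in half]
        a = items[:, list(half)].sum(axis=1)
        b = items[:, other].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, 2 * r / (1 + r))   # Spearman-Brown correction
    return best
```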
Exercise Prescriptions for Active Seniors: A Team Approach for Maximizing Adherence.
ERIC Educational Resources Information Center
Brennan, Fred H., Jr.
2002-01-01
Exercise is an important "medication" that healthcare providers can prescribe for their geriatric patients. Increasing physical fitness by participating in regular exercise can reduce the effects of aging that lead to functional declines and poor health. Modest regular exercise can substantially lower the risk of death from coronary…
Are H-reflex and M-wave recruitment curve parameters related to aerobic capacity?
Piscione, Julien; Grosset, Jean-François; Gamet, Didier; Pérot, Chantal
2012-10-01
Soleus Hoffmann reflex (H-reflex) amplitude is affected by a training period, and type and level of training are also well known to modify aerobic capacities. Previously, paired changes in H-reflex and aerobic capacity have been evidenced after endurance training. The aim of this study was to investigate possible links between H- and M-recruitment curve parameters and aerobic capacity collected in a cohort of subjects (56 young men) who were not involved in regular physical training. The maximal H-reflex normalized with respect to the maximal M-wave (H(max)/M(max)) was measured, as well as other parameters of the H- or M-recruitment curves that provide information about the reflex or direct excitability of the motoneuron pool, such as the thresholds of stimulus intensity to obtain an H or M response (H(th) and M(th)), the ascending slopes of the H-reflex and M-wave recruitment curves (H(slp) and M(slp)) and their ratio (H(slp)/M(slp)). Aerobic capacity, i.e., maximal oxygen consumption and maximal aerobic power (MAP), was estimated from a running field test and from an incremental test on a cycle ergometer, respectively. Maximal oxygen consumption was only correlated with M(slp), an indicator of muscle fiber heterogeneity (p < 0.05), whereas MAP was not correlated with any of the tested parameters (p > 0.05). Although higher H-reflexes are often described for subjects with a high aerobic capacity because of endurance training, at a basic level (i.e., without a training period context) no correlation was observed between the maximal H-reflex and aerobic capacity. Thus, none of the H-reflex or M-wave recruitment curve parameters, except M(slp), was related to the aerobic capacity of young, untrained male subjects.
Speed- and Circuit-Based High-Intensity Interval Training on Recovery Oxygen Consumption
Schleppenbach, Lindsay N.; Ezer, Andreas B.; Gronemus, Sarah A.; Widenski, Katelyn R.; Braun, Saori I.; Janot, Jeffrey M.
2017-01-01
Due to the current obesity epidemic in the United States, there is growing interest in efficient, effective ways to increase energy expenditure and weight loss. Research has shown that high-intensity exercise elicits a higher Excess Post-Exercise Oxygen Consumption (EPOC) throughout the day compared to steady-state exercise. Currently, there is no single research study that examines the differences in Recovery Oxygen Consumption (ROC) resulting from different high-intensity interval training (HIIT) modalities. The purpose of this study is to review the impact of circuit training (CT) and speed interval training (SIT) on ROC in both regularly exercising and sedentary populations. A total of 26 participants were recruited from the UW-Eau Claire campus and divided into regularly exercising and sedentary groups, according to self-reported exercise participation status. Oxygen consumption was measured during and after two HIIT sessions and was used to estimate caloric expenditure. There was no significant difference in caloric expenditure during and after exercise between individuals who regularly exercise and individuals who are sedentary. There was also no significant difference in ROC between regular exercisers and sedentary participants or between SIT and CT. However, there was a significantly higher caloric expenditure in SIT vs. CT regardless of exercise status. It is recommended that individuals engage in SIT vs. CT when the goal is to maximize overall caloric expenditure. With respect to ROC, individuals can choose either modality of HIIT to achieve similar effects on increased oxygen consumption post-exercise. PMID:29170696
NASA Technical Reports Server (NTRS)
DeYoung, J. A.; McKinley, A.; Davis, J. A.; Hetzel, P.; Bauch, A.
1996-01-01
Eight laboratories are participating in an international two-way satellite time and frequency transfer (TWSTFT) experiment. Regular time and frequency transfers have been performed over a period of almost two years, including both European and transatlantic time transfers. The performance of the regular TWSTFT sessions over an extended period has demonstrated conclusively the usefulness of the TWSTFT method for routine international time and frequency comparisons. Regular measurements are performed three times per week, resulting in a regular but unevenly spaced data set. A method is presented that allows an estimate of the values of δ_y(γ) to be formed from these data. In order to maximize efficient use of paid satellite time, an investigation to determine the optimal length of a single TWSTFT session is presented. The optimal experiment length is determined by evaluating how long white phase modulation (PM) instabilities are the dominant noise source during the typical 300-second sampling times currently used. A detailed investigation of the frequency transfers realized via the transatlantic TWSTFT links UTC(USNO)-UTC(NPL), UTC(USNO)-UTC(PTB), and UTC(PTB)-UTC(NPL) is presented. The investigation focuses on the frequency instabilities realized, a three-cornered hat resolution of the δ_y(γ) values, and a comparison of the transatlantic and inter-European determination of UTC(PTB)-UTC(NPL). Future directions of this TWSTFT experiment are outlined.
Takebayashi, T; Varsier, N; Kikuchi, Y; Wake, K; Taki, M; Watanabe, S; Akiba, S; Yamaguchi, N
2008-02-12
In a case-control study in Japan of brain tumours in relation to mobile phone use, we used a novel approach for estimating the specific absorption rate (SAR) inside the tumour, taking account of spatial relationships between tumour localisation and intracranial radiofrequency distribution. Personal interviews were carried out with 88 patients with glioma, 132 with meningioma, and 102 with pituitary adenoma (322 cases in total), and with 683 individually matched controls. All maximal SAR values were below 0.1 W kg(-1), far lower than the level at which thermal effects may occur, the adjusted odds ratios (ORs) for regular mobile phone users being 1.22 (95% confidence interval (CI): 0.63-2.37) for glioma and 0.70 (0.42-1.16) for meningioma. When the maximal SAR value inside the tumour tissue was accounted for in the exposure indices, the overall OR was again not increased and there was no significant trend towards an increasing OR in relation to SAR-derived exposure indices. A non-significant increase in OR among glioma patients in the heavily exposed group may reflect recall bias.
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
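As a concrete example of one of the schemes mentioned, the backward Euler convolution quadrature weights are the series coefficients of (1 - z)^alpha and can be generated recursively. The sketch below assumes only this standard construction, not anything specific to the paper.

```python
import numpy as np

def bdf1_cq_weights(alpha, n):
    """Convolution-quadrature weights for the backward Euler (BDF1) discretization of
    a fractional derivative of order alpha: coefficients of (1 - z)**alpha."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - alpha) / j   # recursion for (-1)^j * binom(alpha, j)
    return w

# The discrete fractional derivative at step n is then
# tau**(-alpha) * sum_j w[j] * u[n - j], with tau the time step.
```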
ERIC Educational Resources Information Center
Ramenzoni, Veronica; Riley, Michael A.; Davis, Tehran; Shockley, Kevin; Armstrong, Rachel
2008-01-01
Three experiments investigated the ability to perceive the maximum height to which another actor could jump to reach an object. Experiment 1 determined the accuracy of estimates for another actor's maximal reach-with-jump height and compared these estimates to estimates of the actor's standing maximal reaching height and to estimates of the…
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
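A minimal sketch of the indirect route described above, i.e. Patlak analysis applied to a single reconstructed time-activity curve. The trapezoidal plasma integral and the t* cutoff value are illustrative assumptions.

```python
import numpy as np

def patlak_indirect(tissue_tac, plasma_tac, times, t_star=20.0):
    """Indirect Patlak estimate for one voxel: slope K_i and intercept V from the
    linearized model  C_T(t)/C_p(t) = K_i * (int_0^t C_p dtau) / C_p(t) + V,  t > t*.

    tissue_tac, plasma_tac, times : 1D arrays of equal length (times in minutes)
    """
    # cumulative trapezoidal integral of the plasma input function
    integ = np.concatenate(([0.0],
                            np.cumsum(np.diff(times) * 0.5 *
                                      (plasma_tac[1:] + plasma_tac[:-1]))))
    x = integ / plasma_tac
    y = tissue_tac / plasma_tac
    late = times > t_star                      # use only the late, linear portion
    K_i, V = np.polyfit(x[late], y[late], 1)
    return K_i, V
```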
Leicht, Anthony; Crowther, Robert; Golledge, Jonathan
2015-05-18
This study examined the impact of regular supervised exercise on body fat, assessed via anthropometry, and eating patterns of peripheral arterial disease patients with intermittent claudication (IC). Body fat, eating patterns and walking ability were assessed in 11 healthy adults (Control) and age- and mass-matched IC patients undertaking usual care (n = 10; IC-Con) or supervised exercise (12-months; n = 10; IC-Ex). At entry, all groups exhibited similar body fat and eating patterns. Maximal walking ability was greatest for Control participants and similar for IC-Ex and IC-Con patients. Supervised exercise resulted in significantly greater improvements in maximal walking ability (IC-Ex 148%-170% vs. IC-Con 29%-52%) and smaller increases in body fat (IC-Ex -2.1%-1.4% vs. IC-Con 8.4%-10%). IC-Con patients exhibited significantly greater increases in body fat compared with Control at follow-up (8.4%-10% vs. -0.6%-1.4%). Eating patterns were similar for all groups at follow-up. The current study demonstrated that regular, supervised exercise significantly improved maximal walking ability and minimised increase in body fat amongst IC patients without changes in eating patterns. The study supports the use of supervised exercise to minimize cardiovascular risk amongst IC patients. Further studies are needed to examine the additional value of other lifestyle interventions such as diet modification.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms to encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce constant point-spread function, PSF; and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
Maximizers in Lipschitz spacetimes are either timelike or null
NASA Astrophysics Data System (ADS)
Graf, Melanie; Ling, Eric
2018-04-01
We prove that causal maximizers in C^{0,1} spacetimes are either timelike or null. This question was posed in Sämann and Steinbauer (2017 arXiv:1710.10887) since bubbling regions in C^{0,α} spacetimes (α < 1) can produce causal maximizers that contain a segment which is timelike and a segment which is null, see Chruściel and Grant (2012 Class. Quantum Grav. 29 145001). While C^{0,1} spacetimes do not produce bubbling regions, the causal character of maximizers for spacetimes with regularity at least C^{0,1} but less than C^{1,1} was unknown until now. As an application we show that timelike geodesically complete spacetimes are C^{0,1}-inextendible.
Crowther, Robert G; Leicht, Anthony S; Spinks, Warwick L; Sangla, Kunwarjit; Quigley, Frank; Golledge, Jonathan
2012-01-01
The purpose of this study was to examine the effects of a 6-month exercise program on submaximal walking economy in individuals with peripheral arterial disease and intermittent claudication (PAD-IC). Participants (n = 16) were randomly allocated to either a control PAD-IC group (CPAD-IC, n = 6) which received standard medical therapy, or a treatment PAD-IC group (TPAD-IC; n = 10) which took part in a supervised exercise program. During a graded treadmill test, physiological responses, including oxygen consumption, were assessed to calculate walking economy during submaximal and maximal walking performance. Differences between groups at baseline and post-intervention were analyzed via Kruskal-Wallis tests. At baseline, CPAD-IC and TPAD-IC groups demonstrated similar walking performance and physiological responses. Postintervention, TPAD-IC patients demonstrated significantly lower oxygen consumption during the graded exercise test, and greater maximal walking performance compared to CPAD-IC. These preliminary results indicate that 6 months of regular exercise improves both submaximal walking economy and maximal walking performance, without significant changes in maximal walking economy. Enhanced walking economy may contribute to physiological efficiency, which in turn may improve walking performance as demonstrated by PAD-IC patients following regular exercise programs.
Regularity estimates up to the boundary for elliptic systems of difference equations
NASA Technical Reports Server (NTRS)
Strikwerda, J. C.; Wade, B. A.; Bube, K. P.
1986-01-01
Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
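A rough sketch of the idea behind a condition-number-constrained covariance estimate: keep the sample eigenvectors, clip the eigenvalues so the condition number stays below a chosen bound, and pick the clipping level by likelihood. The coarse grid search below is a stand-in for the paper's exact solution path; `kappa_max` and all names are illustrative assumptions.

```python
import numpy as np

def condnum_regularized_cov(X, kappa_max=50.0, n_grid=200):
    """Condition-number-constrained covariance sketch.

    X : (n_samples, p) data matrix; returns a (p, p) well-conditioned estimate.
    """
    S = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(S)
    evals = np.maximum(evals, 1e-12)

    def neg_loglik(d):
        # Gaussian negative log-likelihood up to constants, eigenvectors held fixed.
        return np.sum(np.log(d) + evals / d)

    best_u, best_obj = evals.min(), np.inf
    for u in np.geomspace(evals.min(), evals.max(), n_grid):
        d = np.clip(evals, u, kappa_max * u)      # enforce cond number <= kappa_max
        obj = neg_loglik(d)
        if obj < best_obj:
            best_u, best_obj = u, obj
    d = np.clip(evals, best_u, kappa_max * best_u)
    return (evecs * d) @ evecs.T
```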
Aisbett, B; Le Rossignol, P
2003-09-01
The VO2-power regression and estimated total energy demand for a 6-minute supra-maximal exercise test were predicted from a continuous incremental exercise test. Sub-maximal VO2-power co-ordinates were established from the last 40 seconds (s) of 150-second exercise stages. The precision of the estimated total energy demand was determined using the 95% confidence interval (95% CI) of the estimated total energy demand. The linearity of the individual VO2-power regression equations was determined using Pearson's correlation coefficient. The mean 95% CI of the estimated total energy demand was 5.9 +/- 2.5 mL O2 Eq x kg(-1) x min(-1), and the mean correlation coefficient was 0.9942 +/- 0.0042. The current study contends that the sub-maximal VO2-power co-ordinates from a continuous incremental exercise test can be used to estimate supra-maximal energy demand without compromising the precision of the accumulated oxygen deficit (AOD) method.
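A hedged sketch of the AOD bookkeeping implied by the abstract: regress submaximal VO2 on power, extrapolate to the supra-maximal workload to get the estimated demand, and subtract the measured uptake. Variable names and the sampling interval are assumptions for illustration.

```python
import numpy as np

def accumulated_oxygen_deficit(sub_power, sub_vo2, supra_power, supra_vo2, dt_min):
    """AOD estimate from an individual VO2-power regression.

    sub_power, sub_vo2 : submaximal stage power outputs and steady-state VO2 values
    supra_power        : constant power of the supra-maximal test
    supra_vo2          : VO2 samples measured during the supra-maximal test
    dt_min             : sampling interval of supra_vo2 in minutes
    """
    slope, intercept = np.polyfit(sub_power, sub_vo2, 1)   # VO2-power regression
    demand_per_min = slope * supra_power + intercept        # estimated total energy demand
    duration = len(supra_vo2) * dt_min                      # test duration in minutes
    accumulated_demand = demand_per_min * duration
    accumulated_uptake = np.sum(supra_vo2) * dt_min         # measured O2 uptake
    return accumulated_demand - accumulated_uptake
```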
Shape regularized active contour based on dynamic programming for anatomical structure segmentation
NASA Astrophysics Data System (ADS)
Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra
2005-04-01
We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space in a gradient descent fashion towards the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure the consistency of the resulting shape with the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and that it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
On split regular Hom-Lie superalgebras
NASA Astrophysics Data System (ADS)
Albuquerque, Helena; Barreiro, Elisabete; Calderón, A. J.; Sánchez, José M.
2018-06-01
We introduce the class of split regular Hom-Lie superalgebras as the natural extension of the classes of split Hom-Lie algebras and split Lie superalgebras, and study its structure by showing that an arbitrary split regular Hom-Lie superalgebra L is of the form L = U + ∑_j I_j, with U a linear subspace of a maximal abelian graded subalgebra H and each I_j a well-described (split) ideal of L satisfying [I_j, I_k] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized and it is shown that L is the direct sum of the family of its simple ideals.
Rice, Treva K; Sarzynski, Mark A; Sung, Yun Ju; Argyropoulos, George; Stütz, Adrian M; Teran-Garcia, Margarita; Rao, D C; Bouchard, Claude; Rankinen, Tuomo
2012-08-01
Although regular exercise improves submaximal aerobic capacity, there is large variability in its response to exercise training. While this variation is thought to be partly due to genetic differences, relatively little is known about the causal genes. Submaximal aerobic capacity traits in the current report include the responses of oxygen consumption (ΔVO(2)60), power output (ΔWORK60), and cardiac output (ΔQ60) at 60% of VO2max to a standardized 20-week endurance exercise training program. Genome-wide linkage analysis in 475 HERITAGE Family Study Caucasians identified a locus on chromosome 13q for ΔVO(2)60 (LOD = 3.11). Follow-up fine mapping involved a dense marker panel of over 1,800 single-nucleotide polymorphisms (SNPs) in a 7.9-Mb region (21.1-29.1 Mb from p-terminus). Single-SNP analyses found 14 SNPs moderately associated with both ΔVO(2)60 at P ≤ 0.005 and the correlated traits of ΔWORK60 and ΔQ60 at P < 0.05. Haplotype analyses provided several strong signals (P < 1.0 × 10(-5)) for ΔVO(2)60. Overall, association analyses narrowed the target region and included potential biological candidate genes (MIPEP and SGCG). Consistent with maximal heritability estimates of 23%, up to 20% of the phenotypic variance in ΔVO(2)60 was accounted for by these SNPs. These results implicate candidate genes on chromosome 13q12 for the ability to improve submaximal exercise capacity in response to regular exercise. Submaximal exercise at 60% of maximal capacity is an exercise intensity that falls well within the range recommended in the Physical Activity Guidelines for Americans and thus has potential public health relevance.
Rice, Treva K.; Sarzynski, Mark A.; Sung, Yun Ju; Argyropoulos, George; Stütz, Adrian M.; Teran-Garcia, Margarita; Rao, D. C.; Bouchard, Claude
2014-01-01
Although regular exercise improves submaximal aerobic capacity, there is large variability in its response to exercise training. While this variation is thought to be partly due to genetic differences, relatively little is known about the causal genes. Submaximal aerobic capacity traits in the current report include the responses of oxygen consumption (ΔVO260), power output (ΔWORK60), and cardiac output (ΔQ60) at 60% of VO2max to a standardized 20-week endurance exercise training program. Genome-wide linkage analysis in 475 HERITAGE Family Study Caucasians identified a locus on chromosome 13q for ΔVO260 (LOD = 3.11). Follow-up fine mapping involved a dense marker panel of over 1,800 single-nucleotide polymorphisms (SNPs) in a 7.9-Mb region (21.1–29.1 Mb from p-terminus). Single-SNP analyses found 14 SNPs moderately associated with both ΔVO260 at P ≤ 0.005 and the correlated traits of ΔWORK60 and ΔQ60 at P < 0.05. Haplotype analyses provided several strong signals (P<1.0 × 10−5) for ΔVO260. Overall, association analyses narrowed the target region and included potential biological candidate genes (MIPEP and SGCG). Consistent with maximal heritability estimates of 23%, up to 20% of the phenotypic variance in ΔVO260 was accounted for by these SNPs. These results implicate candidate genes on chromosome 13q12 for the ability to improve submaximal exercise capacity in response to regular exercise. Submaximal exercise at 60% of maximal capacity is an exercise intensity that falls well within the range recommended in the Physical Activity Guidelines for Americans and thus has potential public health relevance. PMID:22170014
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ 1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
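The computational benefit of the assumed "low-rank plus diagonal" structure comes from the Woodbury identity, which replaces a p x p inversion by a k x k one (k = number of rank-one components). The sketch below illustrates that identity only; it is not the COP algorithm itself.

```python
import numpy as np

def invert_lowrank_plus_diag(d, U):
    """Invert Omega = diag(d) + U @ U.T via the Woodbury identity.

    (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1},
    so only a k x k system is solved instead of a full p x p inversion.
    """
    Dinv_U = U / d[:, None]                  # D^{-1} U, shape (p, k)
    k = U.shape[1]
    core = np.eye(k) + U.T @ Dinv_U          # small k x k matrix
    return np.diag(1.0 / d) - Dinv_U @ np.linalg.solve(core, Dinv_U.T)

# Quick check against a direct inverse on a small example
rng = np.random.default_rng(1)
p, k = 200, 3
d = rng.uniform(0.5, 2.0, size=p)
U = rng.standard_normal((p, k))
Omega = np.diag(d) + U @ U.T
assert np.allclose(invert_lowrank_plus_diag(d, U), np.linalg.inv(Omega))
```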
Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus
2010-04-15
With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
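One of the baselines described, a classical CCA preceded by dimension-reducing PCA, can be sketched with standard tools. The code below runs the pipeline on synthetic paired data (all sizes and settings are illustrative assumptions); the regularized dual CCA of the paper is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, p, q = 60, 2000, 1500               # few samples, many variables (hypothetical)
shared = rng.standard_normal((n, 2))   # latent structure shared by both data sets
X = shared @ rng.standard_normal((2, p)) + 0.5 * rng.standard_normal((n, p))
Y = shared @ rng.standard_normal((2, q)) + 0.5 * rng.standard_normal((n, q))

# Reduce each data set before the (otherwise ill-posed) canonical analysis
Xr = PCA(n_components=10).fit_transform(X)
Yr = PCA(n_components=10).fit_transform(Y)

cca = CCA(n_components=2).fit(Xr, Yr)
U, V = cca.transform(Xr, Yr)           # paired canonical variates
for k in range(2):
    print(np.corrcoef(U[:, k], V[:, k])[0, 1])   # canonical correlations
```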
Highly Symmetric and Congruently Tiled Meshes for Shells and Domes
Rasheed, Muhibur; Bajaj, Chandrajit
2016-01-01
We describe the generation of all possible shell and dome shapes that can be uniquely meshed (tiled) using a single type of mesh face (tile), and following a single meshing (tiling) rule that governs the mesh (tile) arrangement with maximal vertex, edge and face symmetries. Such tiling arrangements, or congruently tiled meshed shapes, are frequently found in chemical forms (fullerenes or Bucky balls, crystals, quasi-crystals, virus nano shells or capsids), and synthetic shapes (cages, sports domes, modern architectural facades). Congruently tiled meshes are both aesthetic and complete, as they support maximal mesh symmetries with minimal complexity and possess simple generation rules. Here, we generate congruent tilings and meshed shape layouts that satisfy these optimality conditions. Further, the congruent meshes are uniquely mappable to an almost regular 3D polyhedron (or its dual polyhedron), which exhibits face-transitive (and edge-transitive) congruency with at most two types of vertices (each type transitive to the other). The family of all such congruently meshed polyhedra creates a new class of meshed shapes, beyond the well-studied regular, semi-regular and quasi-regular classes, and their duals (Platonic, Catalan and Johnson). While our new mesh class is infinite, we prove that there exists a unique mesh parametrization, where each member of the class can be represented by two integer lattice variables, and is moreover efficiently constructible. PMID:27563368
Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir
2018-06-01
There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals. Copyright © 2018 Elsevier Inc. All rights reserved.
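As an illustration of one member of this family, a minimal permutation-entropy estimator with the usual coarse-graining step for multiscale curves is sketched below; it is a generic textbook-style implementation, not the specific code benchmarked in the study.

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    x = np.asarray(x, float)
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    counts = np.array([c for c in patterns.values() if c > 0], float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / log(factorial(m))

def coarse_grain(x, scale):
    """Non-overlapping averaging used to build multiscale entropy curves."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], float).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
print([round(permutation_entropy(coarse_grain(noise, s)), 3) for s in (1, 2, 5)])
```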
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
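The quadratically regularized weighted least squares estimator at the core of this framework has a closed form, x_hat = (A'WA + lambda C'C)^(-1) A'Wy. The sketch below applies it to a toy one-dimensional deblurring problem; the operators and sizes are invented for illustration.

```python
import numpy as np

def penalized_wls(A, y, W, C, lam):
    """Minimize (y - A x)' W (y - A x) + lam * ||C x||^2.

    Closed form: x_hat = (A' W A + lam C' C)^{-1} A' W y, i.e., a
    Tikhonov-type quadratic penalty on top of a weighted least squares fit.
    """
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A + lam * (C.T @ C), AtW @ y)

# Toy blurred/noisy measurement of a 1-D signal (hypothetical setup)
rng = np.random.default_rng(0)
n = 100
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
A = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)   # simple blur
y = A @ x_true + 0.05 * rng.standard_normal(n)
W = np.eye(n)                                                   # noise weights
C = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]                        # finite differences
x_hat = penalized_wls(A, y, W, C, lam=1.0)
```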
Numerical simulation of coherent resonance in a model network of Rulkov neurons
NASA Astrophysics Data System (ADS)
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
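A minimal single-neuron sketch of the ingredients involved, the standard two-dimensional Rulkov map driven by noise and a spike-train regularity measure, is given below. The map form, parameter values and threshold are assumptions chosen for illustration and may need tuning to sit in the excitable regime; the paper simulates a full random network with external stimulation.

```python
import numpy as np

def rulkov_regularity(alpha=4.1, sigma=-1.2, mu=0.001, noise=0.01,
                      steps=100_000, seed=0):
    """Iterate a single noisy Rulkov map and return a spike regularity index.

    Assumed map (two-dimensional Rulkov form):
        x[n+1] = alpha / (1 + x[n]**2) + y[n]
        y[n+1] = y[n] - mu * (x[n] + 1) + mu * sigma
    Gaussian noise is added to the fast variable. Regularity is measured as
    R = mean(ISI) / std(ISI) of the inter-spike intervals; a maximum of R at
    an intermediate noise level is the signature of coherence resonance.
    """
    rng = np.random.default_rng(seed)
    x, y = -1.0, -3.9
    spikes = []
    for n in range(steps):
        x_new = alpha / (1.0 + x * x) + y + noise * rng.standard_normal()
        y = y - mu * (x + 1.0) + mu * sigma
        if x < 0.0 <= x_new:          # upward threshold crossing = spike
            spikes.append(n)
        x = x_new
    isi = np.diff(spikes)
    return isi.mean() / isi.std() if len(isi) > 2 else np.nan

for d in (0.002, 0.01, 0.05):
    print(d, rulkov_regularity(noise=d))
```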
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data
Dazard, Jean-Eudes; Rao, J. Sunil
2012-01-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
Instationary Generalized Stokes Equations in Partially Periodic Domains
NASA Astrophysics Data System (ADS)
Sauer, Jonas
2018-06-01
We consider an instationary generalized Stokes system with nonhomogeneous divergence data under a periodic condition in only some directions. The problem is set in the whole space, the half space or in (after an identification of the periodic directions with a torus) bounded domains with sufficiently regular boundary. We show unique solvability for all times in Muckenhoupt weighted Lebesgue spaces. The divergence condition is dealt with by analyzing the associated reduced Stokes system and in particular by showing maximal regularity of the partially periodic reduced Stokes operator.
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
Generalized t-statistic for two-group classification.
Komori, Osamu; Eguchi, Shinto; Copas, John B
2015-06-01
In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
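In the classical unfiltered case referred to here, the linear function maximizing the two-sample t-statistic is the Fisher discriminant direction w = S^(-1)(mean1 - mean0); the generalized estimator additionally filters the case group through the nonlinear function U. A small numpy sketch of the classical direction follows.

```python
import numpy as np

def lda_direction(X0, X1):
    """Linear function maximizing the standardized difference between groups.

    Classical (unfiltered) two-group setting: the maximizer of the t-statistic
    over linear functions w'x is the Fisher discriminant direction
    w = S_pooled^{-1} (mean1 - mean0). The paper's generalized version, which
    filters one group through a nonlinear U, is not reproduced here.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    S = ((n0 - 1) * np.cov(X0, rowvar=False) +
         (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
    return np.linalg.solve(S, m1 - m0)

rng = np.random.default_rng(0)
controls = rng.multivariate_normal([0, 0, 0], np.eye(3), size=100)
cases = rng.multivariate_normal([1.0, 0.5, 0.0], np.eye(3), size=100)
w = lda_direction(controls, cases)
print(w / np.linalg.norm(w))
```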
Exercise Adherence. ERIC Digest.
ERIC Educational Resources Information Center
Sullivan, Pat
This digest discusses exercise adherence, noting its vital role in maximizing the benefits associated with physical activity. Information is presented on the following: (1) factors that influence adherence to self-monitored programs of regular exercise (childhood eating habits, and psychological, physical, social, and situational factors); (2)…
Instructional Variables that Make a Difference: Attention to Task and Beyond.
ERIC Educational Resources Information Center
Rieth, Herbert J.; And Others
1981-01-01
Three procedures for increasing the disabled students' academic learning time (ALT) by maximizing allocation time, engagement time, and success rate are discussed, and a direct instructional model for enhancing ALT in both regular and special education environments is described. (CL)
Ouerghi, Nejmeddine; Khammassi, Marwa; Boukorraa, Sami; Feki, Moncef; Kaabachi, Naziha; Bouassida, Anissa
2014-01-01
Background: Data regarding the effect of training on plasma lipids are controversial. Most studies have addressed continuous or long intermittent training programs. The present study evaluated the effect of short-short high-intensity intermittent training (HIIT) on aerobic capacity and plasma lipids in soccer players. Methods: The study included 24 male subjects aged 21–26 years, divided into three groups: experimental group 1 (EG1, n=8) comprising soccer players who, in addition to their regular training, performed short-short HIIT twice a week for 12 weeks; experimental group 2 (EG2, n=8) comprising soccer players who followed a regular football training program; and a control group (CG, n=8) comprising untrained subjects who did not practice regular physical activity. Maximal aerobic velocity and maximal oxygen uptake along with plasma lipids were measured before and after 6 weeks and 12 weeks of the respective training program. Results: Compared with basal values, maximal oxygen uptake had significantly increased in EG1 (from 53.3±4.0 mL/min/kg to 54.8±3.0 mL/min/kg at 6 weeks [P<0.05] and to 57.0±3.2 mL/min/kg at 12 weeks [P<0.001]). Maximal oxygen uptake was increased only after 12 weeks in EG2 (from 52.8±2.7 mL/min/kg to 54.2±2.6 mL/min/kg [P<0.05]), but remained unchanged in CG. After 12 weeks of training, maximal oxygen uptake was significantly higher in EG1 than in EG2 (P<0.05). During training, no significant changes in plasma lipids occurred. However, after 12 weeks, total and low-density lipoprotein cholesterol levels had decreased (by about 2%) in EG1 but increased in CG. High-density lipoprotein cholesterol levels increased in EG1 and EG2, but decreased in CG. Plasma triglycerides decreased by 8% in EG1 and increased by about 4% in CG. Conclusion: Twelve weeks of short-short HIIT improves aerobic capacity. Although changes in the lipid profile were not significant after this training program, they may have a beneficial impact on health. PMID:25378960
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
de Vries, Aijse W; Faber, Gert; Jonkers, Ilse; Van Dieen, Jaap H; Verschueren, Sabine M P
2018-01-01
Virtual Reality (VR) balance training may have advantages over regular exercise training in older adults. However, results so far are conflicting, potentially due to the lack of challenge imposed by the movements in those games. Therefore, the aim of this study was to assess to which extent two similar skiing games challenge balance, as reflected in center of mass (COM) movements relative to their Functional Limits of Stability (FLOS). Thirty young and elderly participants performed two skiing games, one on the Wii Balance Board (Wiiski), which uses a force plate, and one with the Kinect sensor (Kinski), which performs motion tracking. During gameplay, kinematics were captured using seven optoelectronic cameras. FLOS were obtained for eight directions. The influence of games and trials on COM displacement in each of the eight directions, and maximal COM speed, were tested with Generalized Estimating Equations. In all directions with an anterior or medio-lateral component, but not with a posterior component, subjects showed significantly larger maximal %FLOS displacements during the Kinski game than during the Wiiski game. Furthermore, maximal COM displacement and COM speed in Kinski remained similar or increased over trials, whereas for Wiiski they decreased. Our results show the importance of assessing the movement challenge in games used for balance training. Similar games impose different challenges, with the control sensors and their gain settings playing an important role. Furthermore, adaptations led to a decrease in challenge in Wiiski, which might limit the effectiveness of the game as a balance-training tool. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimating the number of regular and dependent methamphetamine users in Australia, 2002-2014.
Degenhardt, Louisa; Larney, Sarah; Chan, Gary; Dobbins, Timothy; Weier, Megan; Roxburgh, Amanda; Hall, Wayne D; McKetin, Rebecca
2016-03-07
To estimate the number of regular and dependent methamphetamine users in Australia. Indirect prevalence estimates were made for each year from 2002-03 to 2013-14. We applied multiplier methods to data on treatment episodes for amphetamines (eg, counselling, rehabilitation, detoxification) and amphetamine-related hospitalisations to estimate the numbers of regular (at least monthly) and dependent methamphetamine users for each year. Dependent users comprised a subgroup of those who used the drug regularly, so that estimates of the sizes of these two populations were not additive. We estimated that during 2013-14 there were 268 000 regular methamphetamine users (95% CI, 187 000-385 000) and 160 000 dependent users (95% CI, 110 000-232 000) aged 15-54 years in Australia. This equated to population rates of 2.09% (95% CI, 1.45-3.00%) for regular and 1.24% (95% CI, 0.85-1.81%) for dependent use. The rate of dependent use had increased since 2009-10 (when the rate was estimated to be 0.74%), and was higher than the previous peak (1.22% in 2006-07). The highest rates were consistently among those aged 25-34 years, in whom the rate of dependent use during 2012-2013 was estimated to be 1.50% (95% CI, 1.05-2.22%). There had also been an increase in the rate of dependent use among those aged 15-24 years (in 2012-13 reaching 1.14%; 95% CI, 0.80-1.69%). There have been increases over the past 12 years in the numbers of regular and dependent methamphetamine users in Australia. Our estimates suggest that the most recent numbers are the highest for this period, and that the increase has been most marked among young adults (those aged 15-34 years). There is an increasing need for health services to engage with people who have developed problems related to their methamphetamine use.
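The multiplier method used here scales an observed count (e.g., treatment episodes) by the reciprocal of the estimated proportion of the target population captured by that data source. A toy calculation with invented numbers (not the study's inputs) follows.

```python
# Multiplier method sketch with hypothetical figures, purely for illustration:
treated_in_year = 12_000   # e.g., people receiving amphetamine-related treatment
p_treated = 0.08           # assumed probability a dependent user is treated that year
estimated_dependent_users = treated_in_year / p_treated
print(estimated_dependent_users)   # 150,000 under these invented inputs
```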
Suction-based grasping tool for removal of regular- and irregular-shaped intraocular foreign bodies.
Erlanger, Michael S; Velez-Montoya, Raul; Mackenzie, Douglas; Olson, Jeffrey L
2013-01-01
To describe a suction-based grasping tool for the surgical removal of irregular-shaped and nonferromagnetic intraocular foreign bodies. A surgical tool with suction capabilities, consisting of a stainless steel shaft with a plastic handle and a customizable and interchangeable suction tip, was designed in order to better engage and manipulate irregular-shaped intraocular foreign bodies of various sizes and physical properties. The maximal suction force and surgical capabilities were assessed in the laboratory and on a cadaveric eye vitrectomy model. The suction force of the water-tight seal between the intraocular foreign body and the suction tip was estimated to be approximately 40 mN. During an open-sky vitrectomy in a porcine model, the device was successful in engaging and firmly securing foreign bodies of different sizes and shapes. The suction-based grasping tool enables removal of irregular-shaped and nonferromagnetic foreign bodies. Copyright 2013, SLACK Incorporated.
Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo
2017-11-01
[Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
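Applying the reported single-regression formula is straightforward; the snippet below simply evaluates it (the multiple-regression variant based on total muscle mass is not reproduced because its coefficients are not stated in the abstract).

```python
def estimate_1rm(isometric_kgf):
    """Reported single-regression formula for knee-extensor 1RM:
    1RM (kg) = 0.714 + 0.783 x maximal isometric muscle strength (kgf)."""
    return 0.714 + 0.783 * isometric_kgf

print(estimate_1rm(40.0))   # e.g., 40 kgf isometric strength -> about 32 kg
```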
Effect of inhibitory firing pattern on coherence resonance in random neural networks
NASA Astrophysics Data System (ADS)
Yu, Haitao; Zhang, Lianghao; Guo, Xinmeng; Wang, Jiang; Cao, Yibin; Liu, Jing
2018-01-01
The effect of inhibitory firing patterns on coherence resonance (CR) in random neuronal networks is systematically studied. Spiking and bursting are the two main types of firing pattern considered in this work. Numerical results show that, irrespective of the inhibitory firing pattern, the regularity of the network is maximized by an optimal intensity of external noise, indicating the occurrence of coherence resonance. Moreover, the firing pattern of the inhibitory neurons indeed has a significant influence on coherence resonance, but its efficacy is determined by network properties. In networks with strong coupling but weak inhibition, bursting neurons largely increase the amplitude of the resonance, while in systems with strong inhibition they can decrease the noise intensity that induces coherence resonance. Different temporal windows of inhibition induced by different inhibitory neurons may account for the above observations. The network structure also plays a constructive role in the coherence resonance. There exists an optimal network topology that maximizes the regularity of the neural system.
Gaudino, Paolo; Alberti, Giampietro; Iaia, F Marcello
2014-08-01
The present study examined the extent to which game format (possession play, SSG-P, and games with regular goals and goalkeepers, SSG-G) and the number of players (5, 7 and 10 a-side) influence the physical demands of small-sided soccer games (SSGs) in elite soccer players. Training data were collected during the in-season period from 26 English Premier League outfield players using global positioning system technology. Total distance covered, distance at different speed categories and maximal speed were calculated. In addition, we focused on changes in velocity by reporting the number of accelerations and decelerations carried out during the SSGs (divided into two categories: moderate and high) and the absolute maximal values of acceleration and deceleration achieved. By taking into account these parameters besides speed and distance values, estimated energy expenditure, average metabolic power and distance covered at different metabolic power categories were calculated. All variables were normalized by time (i.e., per 4 min). The main findings were that the total distance, distances run at high speed (>14.4 km h(-1)) as well as absolute maximum velocity, maximum acceleration and maximum deceleration increased with pitch size (10v10>7v7>5v5; p<.05). Furthermore, total distance, very high (19.8-25.2 km h(-1)) and maximal (>25.2 km h(-1)) speed distances, absolute maximal velocity and maximum acceleration and deceleration were higher in SSG-G than in SSG-P (p<.001). On the other hand, the number of moderate (2-3 m s(-2)) accelerations and decelerations as well as the total number of changes in velocity were greater as the pitch dimensions decreased (i.e., 5v5>7v7>10v10; p<.001) in both SSG-G and SSG-P. In addition, predicted energy cost, average metabolic power and distance covered at every metabolic power category were higher in SSG-P compared to SSG-G and on large compared to small pitch areas (p<.05). A detailed analysis of these drills is pivotal in contemporary football as it enables an in-depth understanding of the workload imposed on each player, which consequently has practical implications for the prescription of the adequate type and amount of stimulus during exercise training. Copyright © 2014 Elsevier B.V. All rights reserved.
Schuler, Megan S; Vasilenko, Sara A; Lanza, Stephanie T
2015-12-01
Substance use and depression often co-occur, complicating treatment of both substance use and depression. Despite research documenting age-related trends in both substance use and depression, little research has examined how the associations between substance use behaviors and depression changes across the lifespan. This study examines how the associations between substance use behaviors (daily smoking, regular heavy episodic drinking (HED), and marijuana use) and depressive symptoms vary from adolescence into young adulthood (ages 12-31), and how these associations differ by gender. Using data from the National Longitudinal Study of Adolescent to Adult Health (Add Health), we implemented time-varying effect models (TVEM), an analytic approach that estimates how the associations between predictors (e.g., substance use measures) and an outcome (e.g., depressive symptoms) vary across age. Marijuana use and daily smoking were significantly associated with depressive symptoms at most ages from 12 to 31. Regular HED was significantly associated with depressive symptoms during adolescence only. In bivariate analyses, the association with depressive symptoms for each substance use behavior was significantly stronger for females at certain ages; when adjusting for concurrent substance use in a multivariate analysis, no gender differences were observed. While the associations between depressive symptoms and both marijuana and daily smoking were relatively stable across ages 12-31, regular HED was only significantly associated with depressive symptoms during adolescence. Understanding age and gender trends in these associations can help tailor prevention efforts and joint treatment methods in order to maximize public health benefit. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xue, Zhaohui; Du, Peijun; Li, Jun; Su, Hongjun
2017-02-01
The generally limited availability of training data relative to the usually high data dimension poses a great challenge to accurate classification of hyperspectral imagery, especially for identifying crops characterized by highly correlated spectra. Moreover, traditional parametric classification models are problematic due to the need for non-singular class-specific covariance matrices. In this research, a novel sparse graph regularization (SGR) method is presented, aiming at robust crop mapping using hyperspectral imagery with very few in situ data. The core of SGR lies in propagating labels from known data to unknown, which is triggered by: (1) the fraction matrix generated for the large unknown data by using an effective sparse representation algorithm with respect to the few training data serving as the dictionary; (2) the prediction function estimated for the few training data by formulating a regularization model based on a sparse graph. Then, the labels of the large unknown data can be obtained by maximizing the posterior probability distribution based on the two ingredients. SGR is more discriminative, data-adaptive, robust to noise, and efficient, which distinguishes it from previously proposed approaches and gives it high potential for discriminating crops, especially when facing insufficient training data and a high-dimensional spectral space. The study area is located at the Zhangye basin in the middle reaches of the Heihe watershed, Gansu, China, where eight crop types were mapped with Compact Airborne Spectrographic Imager (CASI) and Shortwave Infrared Airborne Spectrographic Imager (SASI) hyperspectral data. Experimental results demonstrate that the proposed method significantly outperforms other traditional and state-of-the-art methods.
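The "fraction matrix" step rests on sparse representation against a small labelled dictionary. A generic sparse-representation classification sketch in that spirit is shown below (a lasso fit followed by class-wise reconstruction residuals); it is not the full SGR label-propagation model, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_representation_classify(D, labels, x, alpha=0.01):
    """Assign a label to pixel spectrum x from a small labelled dictionary D.

    Generic sparse-representation classification sketch: solve a lasso of x
    against the training spectra, then pick the class whose coefficients
    reconstruct x with the smallest residual.
    """
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x).coef_
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(x - D[:, mask] @ coef[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: 10 labelled spectra from 2 crop classes, 30 spectral bands
rng = np.random.default_rng(0)
proto = rng.standard_normal((30, 2))               # one prototype per class
labels = np.repeat([0, 1], 5)
D = proto[:, labels] + 0.05 * rng.standard_normal((30, 10))
x = proto[:, 1] + 0.05 * rng.standard_normal(30)
print(sparse_representation_classify(D, labels, x))   # expected: class 1
```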
Callréus, M; McGuigan, F; Ringsberg, K; Akesson, K
2012-10-01
Recreational physical activity in 25-year-old women in Sweden increases bone mineral density (BMD) in the trochanter by 5.5% when combining regularity and impact. Jogging and spinning were especially beneficial for hip BMD (6.4-8.5%). Women who enjoyed physical education in school maintained their higher activity level at age 25. The aims of this study were to evaluate the effects of recreational exercise on BMD and describe how exercise patterns change with time in a normal population of young adult women. In a population-based study of 1,061 women, age 25 (±0.2), BMD was measured at total body (TB-BMD), femoral neck (FN-BMD), trochanter (TR-BMD), and spine (LS-BMD). Self-reported physical activity status was assessed by questionnaire. Regularity of exercise was expressed as recreational activity level (RAL) and impact load as peak strain score (PSS). A permutation (COMB-RP) was used to evaluate combined endurance and impacts on bone mass. More than half of the women reported exercising on a regular basis and the most common activities were running, strength training, aerobics, and spinning. Seventy percent participated in at least one activity during the year. Women with high RAL or PSS had higher BMD in the hip (2.6-3.5%) and spine (1.5-2.1%), with the greatest differences resulting from PSS (p < 0.001-0.02). Combined regularity and impact (high-COMB-RP) conferred the greatest gains in BMD (FN 4.7%, TR 5.5%, LS 3.1%; p < 0.001) despite concomitant lower body weight. Jogging and spinning were particularly beneficial for hip BMD (+6.4-8.5%). Women with high-COMB-RP scores enjoyed physical education in school more and maintained higher activity levels throughout compared to those with low scores. Self-reported recreational levels of physical activity positively influence BMD in young adult women but to maximize BMD gains, regular, high-impact exercise is required. Enjoyment of exercise contributes to regularity of exercising which has short- and long-term implications for bone health.
Coverage-maximization in networks under resource constraints.
Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy
2010-06-01
Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.
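A bare-bones version of a proliferating random walk under a bandwidth budget B on a periodic grid is sketched below; the proliferation probability is held constant here, whereas the paper's algorithm modulates it over time, so this is only a generic illustration of the mechanism.

```python
import numpy as np

def proliferating_coverage(L=50, B=20_000, p_prolif=0.01, seed=0):
    """Coverage of an L x L periodic grid by proliferating random walkers.

    Each move of any walker consumes one unit of the bandwidth budget B;
    walkers copy themselves with probability p_prolif per step. Returns the
    fraction of distinct cells visited when the budget is exhausted.
    (Generic sketch with a constant proliferation rate.)
    """
    rng = np.random.default_rng(seed)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    walkers = [(0, 0)]
    visited = {(0, 0)}
    used = 0
    while used < B and walkers:
        new_walkers = []
        for (i, j) in walkers:
            if used >= B:
                new_walkers.append((i, j))
                continue
            di, dj = steps[rng.integers(4)]
            cell = ((i + di) % L, (j + dj) % L)
            visited.add(cell)
            used += 1
            new_walkers.append(cell)
            if rng.random() < p_prolif:
                new_walkers.append(cell)     # proliferation: spawn a copy here
        walkers = new_walkers
    return len(visited) / (L * L)

print(proliferating_coverage())
```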
Enhancing Social Work Education through Team-Based Learning
ERIC Educational Resources Information Center
Gillespie, Judy
2012-01-01
Group learning strategies are used extensively in social work education, despite the challenges and negative outcomes regularly experienced by students and faculty. Building on principles of cooperative learning, team-based learning offers a more structured approach that maximizes the benefits of cooperative learning while also offering…
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
NASA Astrophysics Data System (ADS)
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Aerobic exercise and respiratory muscle strength in patients with cystic fibrosis.
Dassios, Theodore; Katelari, Anna; Doudounakis, Stavros; Dimitriou, Gabriel
2013-05-01
The beneficial role of exercise in maintaining health in patients with cystic fibrosis (CF) is well described. Few data exist on the effect of exercise on respiratory muscle function in patients with CF. Our objective was to compare respiratory muscle function indices in CF patients that regularly exercise with those CF patients that do not. This cross-sectional study assessed nutrition, pulmonary function and respiratory muscle function in 37 CF patients that undertook regular aerobic exercise and in a control group matched for age and gender which consisted of 44 CF patients that did not undertake regular exercise. Respiratory muscle function in CF was assessed by maximal inspiratory pressure (Pimax), maximal expiratory pressure (Pemax) and pressure-time index of the respiratory muscles (PTImus). Median Pimax and Pemax were significantly higher in the exercise group compared to the control group (92 vs. 63 cm H2O and 94 vs. 64 cm H2O respectively). PTImus was significantly lower in the exercise group compared to the control group (0.089 vs. 0.121). Upper arm muscle area (UAMA) and mid-arm muscle circumference were significantly increased in the exercise group compared to the control group (2608 vs. 2178 mm2 and 23 vs. 21 cm respectively). UAMA was significantly related to Pimax in the exercising group. These results suggest that CF patients that undertake regular aerobic exercise maintain higher indices of respiratory muscle strength and lower PTImus values, while increased UAMA values in exercising patients highlight the importance of muscular competence in respiratory muscle function in this population. Copyright © 2013 Elsevier Ltd. All rights reserved.
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
Nordgren, Birgitta; Fridén, Cecilia; Jansson, Eva; Österlund, Ted; Grooten, Wilhelmus Johannes; Opava, Christina H; Rickenlund, Anette
2014-09-17
Aerobic capacity tests are important to evaluate exercise programs and to encourage individuals to have a physically active lifestyle. Submaximal tests, if proven valid and reliable, could be used for estimation of maximal oxygen uptake (VO2max). The purpose of the study was to examine the criterion validity of the submaximal self-monitoring Fox-walk test and the submaximal Åstrand cycle test against a maximal cycle test in people with rheumatoid arthritis (RA). A secondary aim was to study the influence of different formulas for age-predicted maximal heart rate when estimating VO2max by the Åstrand test. Twenty-seven subjects (81% female), mean (SD) age 62 (8.1) years, diagnosed with RA since 17.9 (11.7) years, participated in the study. They performed the Fox-walk test (775 meters), the Åstrand test and the maximal cycle test (measured VO2max). Pearson's correlation coefficients were calculated to determine the direction and strength of the association between the tests, and paired t-tests were used to test potential differences between the tests. Bland and Altman methods were used to assess whether there was any systematic disagreement between the submaximal tests and the maximal test. The correlations between the estimated and measured VO2max values were strong and ranged between r = 0.52 and r = 0.82, depending on the formula for age-predicted maximal heart rate used when estimating VO2max by the Åstrand test. VO2max was overestimated by 30% by the Fox-walk test and underestimated by 10% by the Åstrand test corrected for age. When the different formulas for age-predicted maximal heart rate were used, the results showed that two formulas better predicted maximal heart rate and consequently gave a more precise estimation of VO2max. Despite the fact that the Fox-walk test overestimated VO2max substantially, the test is a promising method for self-monitoring VO2max and further development of the test is encouraged. The Åstrand test should be considered as highly valid and feasible, and the two newly developed formulas for predicting maximal heart rate according to age are preferable when estimating VO2max by the Åstrand test.
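For context, two widely used age-predicted maximal heart rate formulas are the classical 220 − age and the Tanaka et al. (2001) estimate 208 − 0.7 × age; which formula is plugged into the Åstrand calculation shifts the estimated VO2max. The study's own newly developed formulas are not given in the abstract, so the snippet below uses only the classical ones.

```python
def hrmax_classic(age):
    """Classical age-predicted maximal heart rate: 220 - age."""
    return 220 - age

def hrmax_tanaka(age):
    """Tanaka et al. (2001) formula: 208 - 0.7 x age."""
    return 208 - 0.7 * age

# The age-corrected Astrand estimate depends on predicted maximal heart rate,
# so the choice of formula changes the resulting VO2max estimate.
for age in (50, 62, 70):
    print(age, hrmax_classic(age), hrmax_tanaka(age))
```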
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
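A bare-bones version of the two-stage idea, an L1-penalized fit in both stages in the spirit of 2SLS, can be sketched with off-the-shelf tools; the code below is illustrative only and does not implement the paper's estimator, tuning, or concave penalties.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def two_stage_lasso_iv(Z, X, y):
    """Sketch of a two-stage regularized IV fit (L1 penalty in both stages).

    Stage 1: lasso of each endogenous covariate on the instruments, keeping
    the fitted values. Stage 2: lasso of the outcome on the fitted covariates.
    Mirrors the classical 2SLS recipe with sparsity-inducing penalties.
    """
    X_hat = np.column_stack([
        LassoCV(cv=5).fit(Z, X[:, j]).predict(Z) for j in range(X.shape[1])
    ])
    return LassoCV(cv=5).fit(X_hat, y).coef_

# Synthetic example: 2 endogenous covariates driven by a few of 50 instruments
rng = np.random.default_rng(0)
n, p_z = 200, 50
Z = rng.standard_normal((n, p_z))
X = np.column_stack([Z[:, 0] + Z[:, 1], Z[:, 2] - Z[:, 3]]) \
    + rng.standard_normal((n, 2))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)
print(two_stage_lasso_iv(Z, X, y))   # should be close to [1.5, -2.0]
```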
Relationships between maximal anaerobic power of the arms and legs and javelin performance.
Bouhlel, E; Chelly, M S; Tabka, Z; Shephard, R
2007-06-01
The aim of this study was to examine relationships between maximal anaerobic power, as measured by leg and arm force-velocity tests, estimates of local muscle volume and javelin performance. Ten trained national level male javelin throwers (mean age 19.6+/- 2 years) participated in this study. Maximal anaerobic power, maximal force and maximal velocity were measured during leg (Wmax-L) and arm (Wmax-A) force-velocity tests, performed on appropriately modified forms of Monark cycle ergometer. Estimates of leg and arm muscle volume were made using a standard anthropometric kit. Maximal force of the leg (Fmax-L) was significantly correlated with estimated leg muscle volume (r=0.71, P<0.05). Wmax-L and Wmax-A were both significantly correlated with javelin performance (r=0.76, P<0.01; r=0.71, P <0.05, respectively). Maximal velocity of the leg (Vmax-L) was also significantly correlated with throwing performance (r=0.83; P<0.001). Wmax of both legs and arms were significantly correlated with javelin performance, the closest correlation being for Wmax-L; this emphasizes the importance of the leg muscles in this sport. Fmax-L and Vmax-L were related to muscle volume and to javelin performance, respectively. Force-velocity testing may have value in regulating conditioning and rehabilitation in sports involving throwing.
Induced venous pooling and cardiorespiratory responses to exercise after bed rest
NASA Technical Reports Server (NTRS)
Convertino, V. A.; Sandler, H.; Webb, P.; Annis, J. F.
1982-01-01
Venous pooling induced by a specially constructed garment is investigated as a possible means for reversing the reduction in maximal oxygen uptake regularly observed following bed rest. Experiments involved a 15-day period of bed rest during which four healthy male subjects, while remaining recumbent in bed, received daily 210-min venous pooling treatments from a reverse gradient garment supplying counterpressure to the torso. Results of exercise testing indicate that while maximal oxygen uptake endurance time and plasma volume were reduced and maximal heart rate increased after bed rest in the control group, those parameters remained essentially unchanged for the group undergoing venous pooling treatment. Results demonstrate the importance of fluid shifts and venous pooling within the cardiovascular system in addition to physical activity to the maintenance of cardiovascular conditioning.
High Intensity Interval Training for Maximizing Health Outcomes.
Karlsen, Trine; Aamot, Inger-Lise; Haykowsky, Mark; Rognmo, Øivind
Regular physical activity and exercise training are important actions to improve cardiorespiratory fitness and maintain health throughout life. There is solid evidence that exercise is an effective preventative strategy against at least 25 medical conditions, including cardiovascular disease, stroke, hypertension, colon and breast cancer, and type 2 diabetes. Traditionally, endurance exercise training (ET) to improve health-related outcomes has been performed at low to moderate intensity. However, a growing body of evidence suggests that higher exercise intensities may be superior to moderate intensity for maximizing health outcomes. The primary objective of this review is to discuss how aerobic high-intensity interval training (HIIT), as compared with moderate continuous training, may maximize health outcomes, and to provide practical advice for successful clinical and home-based HIIT. Copyright © 2017. Published by Elsevier Inc.
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2014 CFR
2014-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2012 CFR
2012-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
20 CFR 220.11 - Definitions as used in this subpart.
Code of Federal Regulations, 2013 CFR
2013-04-01
... tests which provide objective measures of a claimant's maximal work ability and includes functional... DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Work in an Employee's Regular Railroad... position to which the employee holds seniority rights or the position which he or she left to work for a...
Educational Programming for Pupils with Neurologically Based Language Disorders. Final Report.
ERIC Educational Resources Information Center
Zedler, Empress Y.
To investigate procedures whereby schools may achieve maximal results with otherwise normal underachieving pupils with neurologically based language-learning disorders, 100 such subjects were studied over a 2-year period. Fifty experimental subjects remained in regular classes in school and received individualized teaching outside of school hours…
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1 norm-based and L2 norm-based constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and more robust behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
NASA Astrophysics Data System (ADS)
Melchert, O.; Hartmann, A. K.
2015-02-01
In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2 for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
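For readers who want to reproduce the simplest of the benchmark estimators, the sketch below computes a finite-M block-entropy estimate of the entropy rate for a binary symbol sequence; the sequence lengths, block size, and helper names are illustrative choices, not the authors' code.

```python
# Sketch of an M-block Shannon entropy estimate for a binary spin sequence
# (parameters are illustrative; estimates are biased for short sequences).
from collections import Counter
import math
import random

def block_entropy(seq, M):
    """Shannon entropy (in bits) of length-M blocks of the sequence."""
    blocks = [tuple(seq[i:i + M]) for i in range(len(seq) - M + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_rate(seq, M):
    """Finite-M estimate of the entropy rate h_M = H(M) - H(M-1)."""
    return block_entropy(seq, M) - block_entropy(seq, M - 1)

# A maximally random sequence gives h close to 1 bit/symbol, while a frozen
# (low-temperature-like) sequence gives h close to 0.
random.seed(1)
noisy = [random.randint(0, 1) for _ in range(10000)]
frozen = [1] * 10000
print(entropy_rate(noisy, M=4), entropy_rate(frozen, M=4))
```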
Stability region maximization by decomposition-aggregation method. [Skylab stability
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve the estimates of the stability regions by formulating and solving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.
On split regular BiHom-Lie superalgebras
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Liangyun; Zhang, Chiping
2018-06-01
We introduce the class of split regular BiHom-Lie superalgebras as the natural extension of the classes of split Hom-Lie superalgebras and split Lie superalgebras. By developing techniques of connections of roots for this kind of algebra, we show that such a split regular BiHom-Lie superalgebra L is of the form L = U + ∑_{[α] ∈ Λ/∼} I_[α], with U a subspace of the Abelian (graded) subalgebra H and each I_[α] a well-described (graded) ideal of L, satisfying [I_[α], I_[β]] = 0 if [α] ≠ [β]. Under certain conditions, in the case of L being of maximal length, the simplicity of the algebra is characterized and it is shown that L is the direct sum of the family of its simple (graded) ideals.
Demura, Shinichi; Morishita, Koji; Yamada, Takayoshi; Yamaji, Shunsuke; Komatsu, Miho
2011-11-01
L-Ornithine plays an important role in ammonia metabolism via the urea cycle. This study aimed to examine the effect of L-ornithine hydrochloride ingestion on ammonia metabolism and performance after intermittent maximal anaerobic cycle ergometer exercise. Ten healthy young adults (age, 23.8 ± 3.9 years; height, 172.3 ± 5.5 cm; body mass, 67.7 ± 6.1 kg) with regular training experience ingested L-ornithine hydrochloride (0.1 g/kg body mass) or placebo after 30 s of maximal cycling exercise. Five sets of the same maximal cycling exercise were conducted 60 min after ingestion, and maximal cycling exercise was conducted after a 15 min rest. The intensity of cycling exercise was based on each subject's body mass (0.74 N kg(-1)). Work volume (watt) and peak rpm before and after intermittent maximal ergometer exercise, as well as the following serum parameters, were measured before ingestion, immediately after exercise, and 15 min after exercise: ornithine, ammonia, urea, lactic acid, and glutamate. Peak rpm was significantly greater with L-ornithine hydrochloride ingestion than with placebo ingestion. Serum ornithine level was significantly greater with L-ornithine hydrochloride ingestion than with placebo ingestion immediately and 15 min after intermittent maximal cycle ergometer exercise. In conclusion, although maximal anaerobic performance may be improved by L-ornithine hydrochloride ingestion before intermittent maximal anaerobic cycle ergometer exercise, this improvement may not depend on an increase in ammonia metabolism induced by L-ornithine hydrochloride.
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm^-1, absorption coefficient: 0.1 cm^-1), with tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
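The final MLEM stage can be sketched generically as the familiar multiplicative Poisson update applied to a sparse starting estimate; the toy system matrix, data, and the stand-in for the homotopy output below are assumptions for illustration, not the authors' FMT model.

```python
# Toy sketch of a Poisson maximum-likelihood EM (MLEM) update started from a
# sparse estimate (system matrix and data here are random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((64, 100))                  # system (sensitivity) matrix, m x n
x_true = np.zeros(100); x_true[[10, 55]] = 50.0
y = rng.poisson(A @ x_true)                # noisy tomographic measurements

x = np.where(x_true > 0, 1.0, 1e-8)        # stand-in for the sparse homotopy output
sens = A.sum(axis=0)                       # per-voxel sensitivity
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)   # data / current forward projection
    x = x * (A.T @ ratio) / sens           # multiplicative MLEM update keeps zeros

print("recovered intensities:", x[[10, 55]])
```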
Joint estimation of preferential attachment and node fitness in growing complex networks
NASA Astrophysics Data System (ADS)
Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi
2016-09-01
Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints; the method works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Joint estimation of preferential attachment and node fitness in growing complex networks
Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi
2016-01-01
Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints; the method works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit. PMID:27601314
NASA Astrophysics Data System (ADS)
Cho, Yumi
2018-05-01
We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems, which approach the regular problems considered as the gradient variable goes to infinity.
Exercise Responses after Inactivity
NASA Technical Reports Server (NTRS)
Convertino, Victor A.
1986-01-01
The exercise response after bed rest inactivity is a reduction in the physical work capacity and is manifested by significant decreases in oxygen uptake. The magnitude of decrease in maximal oxygen intake (VO2max) is related to the duration of confinement and the pre-bed-rest level of aerobic fitness; these relationships are relatively independent of age and gender. The reduced exercise performance and VO2max following bed rest are associated with various physiological adaptations including reductions in blood volume, submaximal and maximal stroke volume, maximal cardiac output, skeletal muscle tone and strength, and aerobic enzyme capacities, as well as increases in venous compliance and submaximal and maximal heart rate. This reduction in physiological capacity can be partially restored by specific countermeasures that provide regular muscular activity or orthostatic stress or both during the bed rest exposure. The understanding of these physiological and physical responses to exercise following bed rest inactivity has important implications for the solution to safety and health problems that arise in clinical medicine, aerospace medicine, sedentary living, and aging.
Zachary, Chase E; Jiao, Yang; Torquato, Salvatore
2011-05-01
Hyperuniform many-particle distributions possess a local number variance that grows more slowly than the volume of an observation window, implying that the local density is effectively homogeneous beyond a few characteristic length scales. Previous work on maximally random strictly jammed sphere packings in three dimensions has shown that these systems are hyperuniform and possess unusual quasi-long-range pair correlations decaying as r(-4), resulting in anomalous logarithmic growth in the number variance. However, recent work on maximally random jammed sphere packings with a size distribution has suggested that such quasi-long-range correlations and hyperuniformity are not universal among jammed hard-particle systems. In this paper, we show that such systems are indeed hyperuniform with signature quasi-long-range correlations by characterizing the more general local-volume-fraction fluctuations. We argue that the regularity of the void space induced by the constraints of saturation and strict jamming overcomes the local inhomogeneity of the disk centers to induce hyperuniformity in the medium with a linear small-wave-number nonanalytic behavior in the spectral density, resulting in quasi-long-range spatial correlations scaling with r(-(d+1)) in d Euclidean space dimensions. A numerical and analytical analysis of the pore-size distribution for a binary maximally random jammed system in addition to a local characterization of the n-particle loops governing the void space surrounding the inclusions is presented in support of our argument. This paper is the first part of a series of two papers considering the relationships among hyperuniformity, jamming, and regularity of the void space in hard-particle packings.
Local Regularity Analysis with Wavelet Transform in Gear Tooth Failure Detection
NASA Astrophysics Data System (ADS)
Nissilä, Juhani
2017-09-01
Diagnosing gear tooth and bearing failures in industrial power transmission applications has been studied extensively, but challenges still remain. This study aims to look at the problem from a more theoretical perspective. Our goal is to find out if the local regularity, i.e. the smoothness, of the measured signal can be estimated from the vibrations of epicyclic gearboxes and if the regularity can be linked to the meshing events of the gear teeth. Previously it has been shown that the decreasing local regularity of the measured acceleration signals can reveal the inner race faults in slowly rotating bearings. The local regularity is estimated from the modulus maxima ridges of the signal's wavelet transform. In this study, the measurements come from the epicyclic gearboxes of the Kelukoski water power station (WPS). The very stable rotational speed of the WPS makes it possible to deduce that the gear mesh frequencies of the WPS and a frequency related to the rotation of the turbine blades are the most significant components in the spectra of the estimated local regularity signals.
ERIC Educational Resources Information Center
Mollenkopf, Dawn L.
2009-01-01
The "highly qualified teacher" requirement of No Child Left Behind has put pressure on rural school districts to recruit and retain highly qualified regular and special education teachers. If necessary, they may utilize uncertified, rural teachers with provisional certification; however, these teachers may find completing the necessary…
Portfolios for Prior Learning Assessment: Caught between Diversity and Standardization
ERIC Educational Resources Information Center
Sweygers, Annelies; Soetewey, Kim; Meeus, Wil; Struyf, Elke; Pieters, Bert
2009-01-01
In recent years, procedures have been established in Flanders for "Prior Learning Assessment" (PLA) outside the formal learning circuit, of which the portfolio is a regular component. In order to maximize the possibilities of acknowledgement of prior learning assessment, the Flemish government is looking for a set of common criteria and…
A Study of Coordination Between Mathematics and Chemistry in the Pre-Technical Program.
ERIC Educational Resources Information Center
Loiseau, Roger A.
This research was undertaken to determine whether the mathematics course offered to students taking courses in chemical technology was adequate. Students in a regular class and an experimental class were given mathematics and chemistry pretests and posttests. The experimental class was taught using a syllabus designed to maximize the coherence…
Benefits of Moderate-Intensity Exercise during a Calorie-Restricted Low-Fat Diet
ERIC Educational Resources Information Center
Apekey, Tanefa A.; Morris, A. E. J.; Fagbemi, S.; Griffiths, G. J.
2012-01-01
Objective: Despite the health benefits, many people do not undertake regular exercise. This study investigated the effects of moderate-intensity exercise on cardiorespiratory fitness (lung age, blood pressure and maximal aerobic power, VO2max), serum lipids concentration and body mass index (BMI) in sedentary overweight/obese adults…
Minimize Subjective Theory, Maximize Authentic Experience in the Teaching of French Civilization.
ERIC Educational Resources Information Center
Corredor, Eva L.
A program developed to teach French civilization and modern France at the U.S. Naval Academy (Annapolis, Maryland) was designed to take advantage of readily available, relatively sophisticated technology for classroom instruction. The hardware used includes a satellite earth station that receives regular television broadcasts from France, a…
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel; hence, it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
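The underlying trace-ratio maximization can be sketched with the standard iteration that alternates an eigen-decomposition with a ratio update; the random scatter matrices and dimensions below are placeholders, not the paper's kernel-derived matrices.

```python
# Sketch of the generic trace-ratio iteration that MKL-TR-style methods build on:
# maximize Tr(V' A V) / Tr(V' B V) over orthonormal V (A, B are toy stand-ins).
import numpy as np

def trace_ratio(A, B, d, iters=50):
    n = A.shape[0]
    lam = 0.0
    V = np.linalg.qr(np.random.default_rng(0).standard_normal((n, d)))[0]
    for _ in range(iters):
        # For the current ratio lam, the optimal V spans the top-d eigenvectors
        # of A - lam * B.
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, np.argsort(w)[::-1][:d]]
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    return V, lam

A = np.random.default_rng(1).standard_normal((20, 20)); A = A @ A.T            # "between" scatter
B = np.random.default_rng(2).standard_normal((20, 20)); B = B @ B.T + np.eye(20)  # "within" scatter
V, lam = trace_ratio(A, B, d=3)
print("maximized trace ratio:", lam)
```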
Graphs to estimate an individualized risk of breast cancer.
Benichou, J; Gail, M H; Mulvihill, J J
1996-01-01
Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and the proper use of them.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
NASA Astrophysics Data System (ADS)
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
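A minimal sketch of such an EM scheme, assuming a known signal shape, a known background, and a single saturation level, is given below; the shape, background, threshold, and data are invented for illustration, and the censoring bookkeeping follows the standard truncated-Poisson identities rather than the authors' exact formulation.

```python
# Hedged sketch of an EM iteration for the amplitude "a" of a Poisson intensity
# a*s + b (known shape s, known background b) with counts right-censored at c.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
s = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)   # known transient shape
b = np.full(100, 2.0)                                     # known background rate
a_true, c = 30.0, 35                                      # true amplitude, saturation level
counts = rng.poisson(a_true * s + b)
obs = np.minimum(counts, c)                               # detector saturates at c
censored = obs >= c                                       # bins where only "N >= c" is known

a = 1.0                                                   # initial amplitude guess
for _ in range(200):
    lam = a * s + b
    # E-step (censoring): replace saturated bins by E[N | N >= c] for Poisson(lam),
    # using E[N | N >= c] = lam * P(N >= c-1) / P(N >= c).
    n_exp = obs.astype(float)
    n_exp[censored] = (lam[censored]
                       * poisson.sf(c - 2, lam[censored])
                       / poisson.sf(c - 1, lam[censored]))
    # E-step (background split): expected signal part of each bin's count.
    sig = n_exp * (a * s) / lam
    # M-step: closed-form update of the amplitude.
    a = sig.sum() / s.sum()

print("estimated amplitude:", a)
```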
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sun-spot process are demonstrated.
Hadamard States for the Klein-Gordon Equation on Lorentzian Manifolds of Bounded Geometry
NASA Astrophysics Data System (ADS)
Gérard, Christian; Oulghazi, Omar; Wrochna, Michał
2017-06-01
We consider the Klein-Gordon equation on a class of Lorentzian manifolds with Cauchy surface of bounded geometry, which is shown to include examples such as exterior Kerr, Kerr-de Sitter spacetime and the maximal globally hyperbolic extension of the Kerr outer region. In this setup, we give an approximate diagonalization and a microlocal decomposition of the Cauchy evolution using a time-dependent version of the pseudodifferential calculus on Riemannian manifolds of bounded geometry. We apply this result to construct all pure regular Hadamard states (and associated Feynman inverses), where regular refers to the state's two-point function having Cauchy data given by pseudodifferential operators. This allows us to conclude that there is a one-parameter family of elliptic pseudodifferential operators that encodes both the choice of (pure, regular) Hadamard state and the underlying spacetime metric.
Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model
Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz
2014-01-01
Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from the estimates of intermediate variables, and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated. PMID:23732915
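In generic form, a regularized direct least-squares estimate solves argmin_x ||Ax - y||^2 + mu*||Dx||^2 in closed form; the sketch below uses a toy forward matrix and a first-difference roughness penalty as stand-ins for the oxygen-tension model, so all names and values are assumptions.

```python
# Generic sketch of a regularized direct least-squares estimate (Tikhonov form);
# A, D, and mu are placeholders, not the phosphorescence-lifetime model.
import numpy as np

def regularized_direct_ls(A, y, mu, D=None):
    n = A.shape[1]
    D = np.eye(n) if D is None else D            # identity = zeroth-order Tikhonov
    return np.linalg.solve(A.T @ A + mu * (D.T @ D), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 60))
x_true = np.convolve(rng.standard_normal(60), np.ones(5) / 5, mode="same")  # smooth profile
y = A @ x_true + 0.5 * rng.standard_normal(80)

# A first-difference operator penalizes rough solutions (spatial regularization).
D1 = np.diff(np.eye(60), axis=0)
x_hat = regularized_direct_ls(A, y, mu=1.0, D=D1)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```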
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
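The shrinkage structure can be illustrated with its simplest instance: for a quadratic data term and a Laplacian (L1) prior, the MAP update is the soft-threshold of the ML update. The paper derives inverse quadratic and inverse cubic shrinkage functions for its generalized priors; the sketch below shows only the basic idea with made-up values.

```python
# Illustrative sketch of the "shrink an unregularized update" idea: with an L1
# (Laplacian) prior, the MAP update is the classic soft-threshold of the ML update.
import numpy as np

def soft_shrink(x_ml, weight):
    """MAP solution of argmin_x 0.5*(x - x_ml)**2 + weight*|x|, elementwise."""
    return np.sign(x_ml) * np.maximum(np.abs(x_ml) - weight, 0.0)

x_ml = np.array([-3.0, -0.2, 0.05, 0.8, 4.0])    # noisy unregularized estimates
print(soft_shrink(x_ml, weight=0.5))              # small values shrink to exactly zero
```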
NASA Astrophysics Data System (ADS)
Luque, Pablo; Mántaras, Daniel A.; Fidalgo, Eloy; Álvarez, Javier; Riva, Paolo; Girón, Pablo; Compadre, Diego; Ferran, Jordi
2013-12-01
The main objective of this work is to determine the limit of safe driving conditions by identifying the maximal friction coefficient in a real vehicle. The study focuses on finding a method to determine this limit before the vehicle skids, which is valuable information in the context of traffic safety. Since it is not possible to measure the friction coefficient directly, it is estimated using the appropriate tools in order to obtain the most accurate information. A real vehicle is instrumented to collect information on general kinematics and steering tie-rod forces. A real-time algorithm is developed to estimate forces and aligning torque in the tyres using an extended Kalman filter and neural network techniques. The methodology is based on determining the aligning torque; this variable allows evaluation of the behaviour of the tyre. It conveys useful information about the tyre-road contact and can be used to predict the maximal tyre grip and safety margin. The maximal grip coefficient is estimated according to a knowledge base, extracted from computer simulation of a highly detailed three-dimensional model, using Adams® software. The proposed methodology is validated and applied to real driving conditions, in which maximal grip and safety margin are properly estimated.
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
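A generic frequency-domain version of such regularized deconvolution, with the regularization weight shaped by an assumed upper bound on the measurand's spectrum, might look like the sketch below; the toy system response, pulse, noise level, and weighting function are all invented for illustration and are not the calibration data of the study.

```python
# Minimal sketch of regularized deconvolution in the frequency domain:
# X_hat(f) = Y(f) * conj(H(f)) / (|H(f)|^2 + alpha * W(f)), where W encodes an
# assumed bound on the measurand's spectrum (all quantities are toy stand-ins).
import numpy as np

fs, n = 100e6, 4096                        # 100 MHz sampling, record length
f = np.fft.rfftfreq(n, d=1 / fs)
H = 1.0 / (1.0 + 1j * f / 20e6)            # toy measuring-system frequency response
t = np.arange(n) / fs
x = np.exp(-((t - 5e-6) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)  # toy pressure pulse
y = np.fft.irfft(np.fft.rfft(x) * H) + 1e-3 * np.random.default_rng(0).standard_normal(n)

Y = np.fft.rfft(y)
W = (f / 10e6) ** 4                        # penalize content above ~10 MHz (assumed bound)
alpha = 1e-2
X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + alpha * W)
x_hat = np.fft.irfft(X_hat)
print("max relative reconstruction error:", np.max(np.abs(x_hat - x)) / np.max(np.abs(x)))
```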
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
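The convexity-preserving idea can be made concrete in its simplest scalar form: with the minimax-concave (MC) penalty, the denoising cost 0.5(y - x)^2 + lam*phi(x; a) remains convex whenever a <= 1/lam, and its minimizer is the firm threshold, which avoids the bias of soft thresholding on large coefficients. The sketch below, with illustrative parameter values, is a minimal instance of that construction, not the thesis code.

```python
# Scalar sketch of convexity-preserving non-convex regularization: the firm
# threshold minimizes 0.5*(y - x)**2 + lam*phi_MC(x; a), convex when a <= 1/lam.
import numpy as np

def firm_threshold(y, lam, a):
    """Minimizer of 0.5*(y - x)**2 + lam*phi_MC(x; a), assuming 0 < a <= 1/lam."""
    x = np.zeros_like(y)
    mid = (np.abs(y) > lam) & (np.abs(y) <= 1.0 / a)
    x[mid] = (np.abs(y[mid]) - lam) / (1.0 - a * lam) * np.sign(y[mid])
    big = np.abs(y) > 1.0 / a
    x[big] = y[big]                      # large values pass through without bias
    return x

y = np.array([-5.0, -1.2, 0.3, 1.5, 6.0])
lam = 1.0
print("soft:", np.sign(y) * np.maximum(np.abs(y) - lam, 0))   # biased for large |y|
print("firm:", firm_threshold(y, lam, a=0.5))                  # a <= 1/lam keeps the cost convex
```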
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended to the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Bolduc, Virginie; Thorin-Trescases, Nathalie; Thorin, Eric
2013-09-01
Cognitive performances are tightly associated with the maximal aerobic exercise capacity, both of which decline with age. The benefits on mental health of regular exercise, which slows the age-dependent decline in maximal aerobic exercise capacity, have been established for centuries. In addition, the maintenance of an optimal cerebrovascular endothelial function through regular exercise, part of a healthy lifestyle, emerges as one of the key and primary elements of successful brain aging. Physical exercise requires the activation of specific brain areas that trigger a local increase in cerebral blood flow to match neuronal metabolic needs. In this review, we propose three ways by which exercise could maintain the cerebrovascular endothelial function, a premise to a healthy cerebrovascular function and an optimal regulation of cerebral blood flow. First, exercise increases blood flow locally and increases shear stress temporarily, a known stimulus for endothelial cell maintenance of Akt-dependent expression of endothelial nitric oxide synthase, nitric oxide generation, and the expression of antioxidant defenses. Second, the rise in circulating catecholamines during exercise not only facilitates adequate blood and nutrient delivery by stimulating heart function and mobilizing energy supplies but also enhances endothelial repair mechanisms and angiogenesis. Third, in the long term, regular exercise sustains a low resting heart rate that reduces the mechanical stress imposed to the endothelium of cerebral arteries by the cardiac cycle. Any chronic variation from a healthy environment will perturb metabolism and thus hasten endothelial damage, favoring hypoperfusion and neuronal stress.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
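Stripped to its essentials, regularized history matching is a misfit-plus-penalty minimization solved with a conjugate-gradient optimizer; the sketch below uses a toy linear forward model and a first-difference smoothness penalty as stand-ins for the reservoir simulator and spline parametrization, so all names and values are assumptions.

```python
# Hedged sketch of regularized history matching as a generic inverse problem:
# minimize ||G m - d||^2 + mu * ||L m||^2 with a conjugate-gradient optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 80))                  # toy forward (pressure) model
m_true = np.sin(np.linspace(0, 3 * np.pi, 80))     # smooth "permeability" profile
d = G @ m_true + 0.1 * rng.standard_normal(40)     # noisy well-pressure data
L = np.diff(np.eye(80), axis=0)                    # first-difference regularizer
mu = 1.0

def cost(m):
    return np.sum((G @ m - d) ** 2) + mu * np.sum((L @ m) ** 2)

def grad(m):
    return 2 * G.T @ (G @ m - d) + 2 * mu * L.T @ (L @ m)

res = minimize(cost, np.zeros(80), jac=grad, method="CG")
print("relative error:", np.linalg.norm(res.x - m_true) / np.linalg.norm(m_true))
```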
The extended Fourier transform for 2D spectral estimation.
Armstrong, G S; Mandelshtam, V A
2001-11-01
We present a linear algebraic method, named the eXtended Fourier Transform (XFT), for spectral estimation from truncated time signals. The method is a hybrid of the discrete Fourier transform (DFT) and the regularized resolvent transform (RRT) (J. Chen et al., J. Magn. Reson. 147, 129-137 (2000)). Namely, it estimates the remainder of a finite DFT by RRT. The RRT estimation corresponds to solution of an ill-conditioned problem, which requires regularization. The regularization depends on a parameter, q, that essentially controls the resolution. By varying q from 0 to infinity one can "tune" the spectrum between a high-resolution spectral estimate and the finite DFT. The optimal value of q is chosen according to how well the data fits the form of a sum of complex sinusoids and, in particular, the signal-to-noise ratio. Both 1D and 2D XFT are presented with applications to experimental NMR signals. Copyright 2001 Academic Press.
Colorectal Cancer After Start of Nonsteroidal Anti-Inflammatory Drug Use
Stürmer, Til; Buring, Julie E.; Lee, I-Min; Kurth, Tobias; Gaziano, J. Michael; Glynn, Robert J.
2006-01-01
Purpose: Nonsteroidal anti-inflammatory drugs (NSAIDs), including aspirin, have been consistently shown to reduce the risk of colorectal cancer (CRC) in non-experimental studies, but little is known of the factors associated with starting and continuing regular NSAID use and their effect on the NSAID-CRC association. Subjects and Methods: Prospective cohort study of 22,071 healthy male physicians aged 40–84 years without indications or contraindications to regular NSAID use at baseline. Annual questionnaires assessed quantity of NSAID use, occurrence of cancer, and risk factors for CRC. Propensity for regular NSAID use (> 60 days/year) was estimated using generalized estimating equations. We used a time-varying Cox proportional hazards model to estimate the association between duration since initiation of regular NSAID use and risk for CRC. Results: Regular non-aspirin and any NSAID use increased from 0 to 12% and 1 to 56% over time, respectively, and was predicted by age, body mass index, alcohol consumption, medication use, coronary artery disease, gastrointestinal diseases, arthritis, hypertension, and headaches. Over a median follow-up of 18 years, 495 physicians were diagnosed with CRC. There was no trend of CRC risk with increased duration of regular NSAID use. Five or more years of regular use of any NSAID were associated with a relative risk for CRC of 1.0 (95% confidence interval: 0.7–1.5), after adjustment for predictors of regular NSAID use. Conclusion: Regular NSAID use was not associated with a substantial risk reduction of CRC after controlling for time-varying predictors of both NSAID use and CRC. PMID:16750963
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions for regularization. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with large-magnitude motions and motion discontinuities, and produces accurate piecewise-smooth motion fields.
An ERP study of regular and irregular English past tense inflection.
Newman, Aaron J; Ullman, Michael T; Pancheva, Roumyana; Waligura, Diane L; Neville, Helen J
2007-01-01
Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs ("regulars") is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, "irregular" English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over midline parietal sites (P600s), and seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory.
Physical activity and maximal oxygen uptake in adults with Prader-Willi syndrome.
Gross, Itai; Hirsch, Harry J; Constantini, Naama; Nice, Shachar; Pollak, Yehuda; Genstil, Larry; Eldar-Geva, Talia; Tsur, Varda Gross
2017-03-16
Prader-Willi Syndrome (PWS) is the most common genetic syndrome causing life-threatening obesity. Strict adherence to a low-calorie diet and regular physical activity are needed to prevent weight gain. Direct measurement of maximal oxygen uptake (VO2max), the "gold standard" for assessing aerobic exercise capacity, has not been previously described in PWS. Our aim was to assess aerobic capacity by direct measurement of VO2max in adults with PWS and in age- and BMI-matched controls (OC), and to compare the results with values obtained by indirect prediction methods. Seventeen individuals (12 males), age 19-35 (28.6 ± 4.9) years, BMI 19.4-38.1 (27.8 ± 5) kg/m2, with genetically confirmed PWS who exercise daily, and 32 matched OC (22 males), age 19-36 (29.3 ± 5.2) years, BMI 21.1-48.1 (26.3 ± 4.9) kg/m2, participated. All completed a medical questionnaire and performed strength and flexibility tests. VO2max was determined by measuring oxygen consumption during a graded exercise test on a treadmill. VO2max (24.6 ± 3.4 vs 46.5 ± 12.2 ml/kg/min, p < 0.001), ventilatory threshold (20 ± 2 vs 36.2 ± 10.5 ml/kg/min, p < 0.001), maximal strength of both hands (36 ± 4 vs 91.4 ± 21.2 kg, p < 0.001), and flexibility (15.2 ± 9.5 vs 26 ± 11.1 cm, p = 0.001) were all significantly lower for PWS compared to OC. Predicted estimates and direct measurements of VO2max were almost identical for the OC group (p = 0.995), whereas for the PWS group both prediction methods gave values that were significantly greater (p < 0.001) than the direct measurements. Aerobic capacity, assessed by direct measurement of VO2max, is significantly lower in PWS adults, even in those who exercise daily, compared to OC. Indirect estimates of VO2max are accurate for OC but unreliable in PWS. Direct measurement of VO2max should be used for designing personal training programs and in clinical studies of exercise in PWS.
Recent advancements in GRACE mascon regularization and uncertainty assessment
NASA Astrophysics Data System (ADS)
Loomis, B. D.; Luthcke, S. B.
2017-12-01
The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D
In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.
ERIC Educational Resources Information Center
Wood, Richard E.
Second language instruction in the U.S. and Europe is in difficulties. The choice of a second language is arbitrary and the motivation dubious. In Europe and now also in the U.S., attention has turned to the planned interlanguage Esperanto, which offers a maximally regularized structure, is considered "easy" by learners, and has the…
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas Frank; Heimann, Martin
2018-03-01
Atmospheric inversions are widely used in the optimization of surface carbon fluxes on a regional scale using information from atmospheric CO2 dry mole fractions. In many studies the prior flux uncertainty applied to the inversion schemes does not directly reflect the true flux uncertainties but is used to regularize the inverse problem. Here, we aim to implement an inversion scheme using the Jena inversion system and applying a prior flux error structure derived from a model-data residual analysis at high spatial and temporal resolution over a full year in the European domain. We analyzed the performance of the inversion system with a synthetic experiment, in which the flux constraint is derived following the same residual analysis but applied to the model-model mismatch. The synthetic study showed good agreement between posterior and true fluxes on European, country, annual and monthly scales. Posterior monthly, country-aggregated fluxes improved their correlation coefficient with the known truth by 7 % compared to the prior estimates, with a mean correlation of 0.92. The ratio of the posterior-to-reference standard deviation to the prior-to-reference standard deviation was also reduced by 33 %, with a mean value of 1.15. We identified the temporal and spatial scales on which the inversion system maximizes the derived information; monthly temporal scales at around 200 km spatial resolution seem to maximize the information gain.
Psycho-physiological analysis of an aerobic dance programme for women
Rockefeller, Kathleen A.; Burke, E. J.
1979-01-01
The purpose of this study was to determine: (1) the energy cost and (2) the psycho-physiological effects of an aerobic dance programme in young women. Twenty-one college-age women participated 40 minutes a day, three days a week, for a 10-week training period. Each work session included a five-minute warm-up period, a 30-minute stimulus period (including walk-runs) and a five-minute cool-down period. During the last four weeks of the training period, the following parameters were monitored in six of the subjects during two consecutive sessions: perceived exertion (RPE) utilising the Borg 6-20 scale, Mean = 13.19; heart rate (HR) monitored at regular intervals during the training session, Mean = 166.37; and estimated caloric expenditure based on measured oxygen consumption (V̇O2) utilising a Kofranyi-Michaelis respirometer, Mean = 289.32. Multivariate analysis of variance (MANOVA) computed between pre and post tests for the six dependent variables revealed a significant approximate F-ratio of 5.72 (p <.05). Univariate t-test analysis of mean changes revealed significant pre-post test differences for V̇O2 max expressed in ml/kg min-1, maximal pulmonary ventilation, maximal working capacity on the bicycle ergometer, submaximal HR and submaximal RPE. Body weight was not significantly altered. It was concluded that the aerobic dance training programme employed was of sufficient intensity to elicit significant physiological and psycho-physiological alterations in college-age women. PMID:465914
On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry
NASA Astrophysics Data System (ADS)
Lu, Xin Yang
2018-04-01
In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth with elastic forces on vicinal surfaces in 2+1 dimensions, in the radial case and with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via the Cauchy principal value. We first derive a formally equivalent parabolic evolution equation (i.e., fully equivalent when sufficient regularity is assumed), and the main aim is to prove existence, uniqueness and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple-points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to a correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers. (b) Monte Carlo estimates of the triple-points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for entry and exit current, respectively, as functions of interaction strength show maxima for α*, whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple-point for monomers show a similar trend as the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show an opposite trend compared to the Monte Carlo data.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
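The black-box Monte-Carlo divergence trick described in this abstract can be illustrated with a minimal sketch. The code below is a generic, real-valued SURE estimator under an i.i.d. Gaussian noise assumption; it is not the authors' weighted k-space, complex-valued implementation, and the soft-threshold "reconstruction" used in the example is only a stand-in for an arbitrary algorithm.

```python
import numpy as np

def monte_carlo_sure(recon, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE for a black-box reconstruction map `recon`.

    The divergence (trace of the Jacobian of recon) is estimated with a single
    random probe, so only the algorithm's output is needed.  Assumes
    y = x + n with i.i.d. Gaussian noise of standard deviation `sigma`.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    b = rng.standard_normal(y.shape)                      # random probe vector
    f_y = recon(y)
    div = np.vdot(b, recon(y + eps * b) - f_y) / eps      # ~ trace of the Jacobian
    n = y.size
    # SURE estimate of the per-sample MSE of recon(y) with respect to the true signal
    return (np.sum((f_y - y) ** 2) - n * sigma ** 2 + 2 * sigma ** 2 * div) / n

# Example: pick the soft-threshold level that minimizes the SURE estimate.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(900), 3.0 * rng.standard_normal(100)])
sigma = 1.0
y = x_true + sigma * rng.standard_normal(x_true.shape)
for lam in (0.5, 1.0, 2.0, 3.0):
    soft = lambda z, t=lam: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    print(lam, monte_carlo_sure(soft, y, sigma, rng=1))
```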
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those from weekend days.
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform conventional diversity combining methods. PMID:25045725
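A minimal numpy sketch of the conventional MRC baseline the evolutionary algorithms are compared against: with equal noise power per branch the SNR-maximizing weights are the channel gains themselves (conjugated in the combiner), and a noisy channel estimate lowers the combined SNR. This is an illustration of the baseline only, not the ICA/PSO/GA procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                                        # diversity branches
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)   # Rayleigh channel
noise_var = 0.1

def combined_snr(w, h, noise_var):
    """Post-combining SNR for combiner output w^H r, equal noise power per branch."""
    signal = np.abs(np.vdot(w, h)) ** 2
    noise = noise_var * np.sum(np.abs(w) ** 2)
    return signal / noise

w_mrc = h                                                    # MRC weights with perfect CSI
h_hat = h + 0.3 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))  # imperfect estimate
w_imp = h_hat                                                # MRC weights built from the estimate

print("MRC SNR, perfect CSI:  ", combined_snr(w_mrc, h, noise_var))
print("MRC SNR, imperfect CSI:", combined_snr(w_imp, h, noise_var))
```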
A more powerful exact test of noninferiority from binary matched-pairs data.
Lloyd, Chris J; Moldovan, Max V
2008-08-15
Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization as an alternative to the partial maximization of Berger and Boos (Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, the nuisance parameter being controlled by estimation followed by maximization. 2008 John Wiley & Sons, Ltd
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
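The thesis benchmarks its estimator against the graphical lasso for sparse precision-matrix estimation. A hedged sketch of that Gaussian baseline with scikit-learn is shown below; it assumes Gaussian data and is not the nonparametric score-matching estimator described above.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 10, 500

# Sparse ground-truth precision matrix (chain graph) and Gaussian samples from it.
prec = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

model = GraphicalLasso(alpha=0.05).fit(X)        # l1-penalized Gaussian MLE
est_prec = model.precision_

# Recovered edge set: nonzero off-diagonal entries of the estimated precision matrix.
edges = np.argwhere(np.triu(np.abs(est_prec) > 1e-4, k=1))
print("estimated edges:", edges.tolist())
```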
Prediction of Maximal Aerobic Capacity in Severely Burned Children
Porro, Laura; Rivero, Haidy G.; Gonzalez, Dante; Tan, Alai; Herndon, David N.; Suman, Oscar E.
2011-01-01
Introduction Maximal oxygen uptake (VO2 peak) is an indicator of cardiorespiratory fitness, but its measurement requires expensive equipment and a relatively high technical skill level. Purpose The aim of this study is to provide a formula for estimating VO2 peak in burned children, using information obtained without expensive equipment. Methods Children with ≥40% total body surface area (TBSA) burned underwent a modified Bruce treadmill test to assess VO2 peak at 6 months after injury. We recorded gender, age, %TBSA, %3rd degree burn, height, weight, treadmill time, maximal speed, maximal grade, and peak heart rate, applied McHenry's select algorithm to extract important independent variables, and used robust multiple regression to establish prediction equations. Results 42 children, 7 to 17 years old, were tested. The robust multiple regression model provided the equation: VO2 = 10.33 − 0.62 × Age (years) + 1.88 × Treadmill Time (min) + 2.3 × Gender (Females = 0, Males = 1). The correlation between measured and estimated VO2 peak was R = 0.80. We then validated the equation with a group of 33 burned children, which yielded a correlation between measured and estimated VO2 peak of R = 0.79. Conclusions Using only a treadmill and easily gathered information, VO2 peak can be estimated in children with burns. PMID:21316155
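The published regression equation can be applied directly; the small helper below simply transcribes it (units: ml/kg/min). The example inputs are hypothetical.

```python
def estimate_vo2_peak(age_years, treadmill_time_min, male):
    """Estimated VO2 peak (ml/kg/min) from the reported regression equation:
    VO2 = 10.33 - 0.62 * age + 1.88 * treadmill time + 2.3 * gender,
    with gender coded 0 for females and 1 for males.
    """
    return 10.33 - 0.62 * age_years + 1.88 * treadmill_time_min + 2.3 * (1 if male else 0)

# Example: a 12-year-old boy who completed 10 minutes of the modified Bruce protocol.
print(round(estimate_vo2_peak(12, 10, male=True), 1))   # ~ 24.0 ml/kg/min
```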
Regularity of a renewal process estimated from binary data.
Rice, John D; Strawderman, Robert L; Johnson, Brent A
2017-10-09
Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
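For a gamma interevent-time distribution the coefficient of variation is determined by the shape parameter alone (CV = 1/sqrt(shape)), which is the monotone relationship the abstract exploits. The sketch below fits fully observed interevent times with scipy; it does not reproduce the paper's estimators for interval-censored binary data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated fully observed interevent times from a gamma renewal process.
true_shape, true_scale = 4.0, 7.0            # CV = 1/sqrt(4) = 0.5 (fairly regular behavior)
times = rng.gamma(true_shape, true_scale, size=200)

shape_hat, loc_hat, scale_hat = stats.gamma.fit(times, floc=0)   # MLE with location fixed at 0
cv_hat = 1.0 / np.sqrt(shape_hat)            # coefficient of variation of gamma(shape, scale)

print(f"estimated shape = {shape_hat:.2f}, estimated CV = {cv_hat:.2f}")
```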
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
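In its simplest discrete form, the Tikhonov penalty discussed here adds a weighted least-squares term in the unknown to an ill-conditioned linear problem. The sketch below shows that generic form only; the finite-element stabilization alternative proposed in the note is not reproduced.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
# An ill-conditioned forward operator (smoothing kernel) and noisy data.
t = np.linspace(0, 1, 60)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)

for lam in (1e-8, 1e-4, 1e-1):
    x_hat = tikhonov_solve(A, b, lam)
    print(f"lambda = {lam:.0e}  reconstruction error = {np.linalg.norm(x_hat - x_true):.3f}")
```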
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Tucker, Megan R; Laugesen, Murray; Grace, Randolph C
2017-03-03
Very Low Nicotine Content (VLNC) cigarettes might be useful as part of a tobacco control strategy, but relatively little is known about their acceptability as substitutes for regular cigarettes. We compared subjective effects and demand for regular cigarettes and Very Low Nicotine Content (VLNC) cigarettes, and estimated cross-price elasticity for VLNC cigarettes, using simulated demand tasks. 40 New Zealand smokers sampled a VLNC cigarette and completed Cigarette Purchase Tasks to indicate their demand for regular cigarettes and VLNC cigarettes at a range of prices, and a cross-price task indicating how many regular cigarettes and VLNC cigarettes they would purchase at 0.5x, 1x, and 2x the current market price for regular cigarettes, assuming the price of VLNC cigarettes remained constant. They also rated the subjective effects of the VLNC cigarette and their usual-brand regular cigarettes. Cross-price elasticity for VLNC cigarettes was estimated as 0.24 and was significantly positive, indicating that VLNC cigarettes are partially substitutable for regular cigarettes. VLNC cigarettes were rated as less satisfying and psychologically rewarding than regular cigarettes, but this was unrelated to demand or substitutability. VLNC cigarettes are potentially substitutable for regular cigarettes. Their availability may reduce tobacco consumption, nicotine intake and addiction; making it easier for smokers to quit. VLNC cigarettes share the behavioural and sensory components of smoking whilst delivering negligible levels of nicotine. Although smokers rated VLNCs as less satisfying than regular cigarettes, smokers said they would increase their consumption of VLNCs as the price of regular cigarettes increased, if VLNCs were available at a lower price. This suggests that VLNCs are partially substitutable for regular cigarettes. VLNCs can be part of an effective tobacco control strategy, by reducing nicotine dependence and improving health and financial outcomes for smokers. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
With age a lower individual breathing reserve is associated with a higher maximal heart rate.
Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes
2018-01-01
Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is supposed to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR × stroke volume) and high ventilatory requirements. Due to the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with the age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad age range (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve due to a high ventilatory demand and/or a low ventilatory capacity, which is more pronounced at a higher age, was associated with a higher HRmax. Age explained 72% of the observed variance in HRmax, which improved to 83% when the variable "breathing reserve" was entered. The presented findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e. a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association has to be investigated in a more physiological scenario. Copyright © 2017 Elsevier B.V. All rights reserved.
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.
2015-10-01
To optimize the parameters of a beta-electrical converter of isotope Nickel-63 radiation, a model of the distribution of the EHP generation rate in the semiconductor must be derived. Using Monte-Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gaussian function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation were estimated.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
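The sensitivity of plain EM to its starting point, which DQAEM is designed to mitigate, is easy to reproduce with a standard Gaussian-mixture EM. The hedged sketch below uses scikit-learn and has no connection to the quantum-annealing extension itself; it only illustrates the local-optima problem and the usual (costly) multi-restart workaround.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# One isolated cluster plus a nearby pair: plenty of local optima for EM.
X = np.concatenate([
    rng.normal(-5.0, 0.5, 300),
    rng.normal(0.0, 0.5, 300),
    rng.normal(0.6, 0.5, 300),
]).reshape(-1, 1)

for seed in range(5):
    gm = GaussianMixture(n_components=3, n_init=1, init_params="random", random_state=seed)
    gm.fit(X)
    print(f"seed={seed}  total log-likelihood={gm.score(X) * len(X):.1f}")

# Restarting EM several times and keeping the best run is the usual workaround.
best = GaussianMixture(n_components=3, n_init=20, init_params="random", random_state=0).fit(X)
print("best of 20 restarts:", best.score(X) * len(X))
```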
Periodic binary sequence generators: VLSI circuits considerations
NASA Technical Reports Server (NTRS)
Perlman, M.
1984-01-01
Feedback shift registers are efficient periodic binary sequence generators. Polynomials of degree r over the Galois field of characteristic 2, GF(2), characterize the behavior of shift registers with linear logic feedback. The algorithmic determination of the trinomial of lowest degree, when it exists, that contains a given irreducible polynomial over GF(2) as a factor is presented. This corresponds to embedding the behavior of an r-stage shift register with linear logic feedback into that of an n-stage shift register with a single two-input modulo-2 summer (i.e., Exclusive-OR gate) in its feedback. This leads to a Very Large Scale Integrated (VLSI) circuit architecture of maximal regularity (i.e., identical cells) with intercell communications serialized to a maximal degree.
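The single-XOR feedback structure described above can be illustrated with a small Fibonacci LFSR whose feedback is defined by a trinomial. The sketch below uses x^4 + x + 1, which is primitive over GF(2), so a nonzero seed yields a maximal-length sequence of period 2^4 − 1 = 15; it is an illustration only, not the embedding algorithm of the report.

```python
def lfsr_trinomial(n, k, seed, steps):
    """Fibonacci LFSR with feedback trinomial x^n + x^k + 1 (a single XOR gate).

    `seed` is a list of n bits; each step shifts out one output bit and feeds
    back the XOR of stages n and k (the two-input modulo-2 summer).
    """
    state = list(seed)
    out = []
    for _ in range(steps):
        fb = state[n - 1] ^ state[k - 1]      # single two-input modulo-2 summer
        out.append(state[n - 1])
        state = [fb] + state[:-1]
    return out

# x^4 + x + 1 is primitive over GF(2), so the period is 2^4 - 1 = 15.
seq = lfsr_trinomial(4, 1, [1, 0, 0, 0], 30)
print(seq[:15])
print(seq[:15] == seq[15:30])    # True: the sequence repeats with period 15
```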
A comparative study of covariance selection models for the inference of gene regulatory networks.
Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola
2013-10-01
The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ(2C)' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ(2C) outperformed the other two methods, yielding the most predictive partial correlation estimates and the highest sensitivity to infer conditional dependencies between genes even when only a small number of observations was available. The application of this method for inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and the cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
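The PINV-style estimator described in (a) amounts to forming partial correlations from the Moore-Penrose pseudoinverse of the sample covariance when n < p. A minimal numpy sketch of that idea, on synthetic data, is shown below; the RCM and ℓ(2C) variants are not reproduced.

```python
import numpy as np

def partial_correlations_pinv(X):
    """Partial correlation estimates from the Moore-Penrose pseudoinverse of the
    sample covariance matrix (a 'PINV'-style estimator for n < p settings).
    """
    S = np.cov(X, rowvar=False)
    P = np.linalg.pinv(S)                    # pseudoinverse replaces the (singular) inverse
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)                  # rho_ij = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(R, 1.0)
    return R

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))           # n = 30 observations, p = 100 genes
R = partial_correlations_pinv(X)
print(R.shape, round(R[0, 1], 3))
```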
Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya
2014-01-01
Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information. PMID:25162401
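The core of the approach above is an L1-regularized vector auto-regressive model: each gene's next value is regressed on all genes at the previous time point with a sparsity penalty. The sketch below illustrates only that VAR-with-lasso idea on simulated data; the state-space machinery, prior biological knowledge and drug kinetics of the actual method are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, p = 60, 15                                        # time points, genes

# Simulate a stable sparse VAR(1) process x_{t+1} = A x_t + noise.
A_true = np.zeros((p, p))
idx = rng.integers(0, p, (20, 2))
A_true[idx[:, 0], idx[:, 1]] = 0.4
A_true *= 0.9 / max(1.0, np.max(np.abs(np.linalg.eigvals(A_true))))   # keep it stable
X = np.zeros((T, p))
X[0] = rng.standard_normal(p)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(p)

# L1-regularized estimate of each row of A from the time-course data.
A_hat = np.zeros((p, p))
for i in range(p):
    A_hat[i] = Lasso(alpha=0.01, max_iter=5000).fit(X[:-1], X[1:, i]).coef_

print("nonzero regulatory links recovered:", int(np.sum(np.abs(A_hat) > 1e-3)))
```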
Regular use of alcohol and tobacco in India and its association with age, gender, and poverty.
Neufeld, K J; Peters, D H; Rani, M; Bonu, S; Brooner, R K
2005-03-07
This study provides national estimates of regular tobacco and alcohol use in India and their associations with gender, age, and economic group obtained from a representative survey of 471,143 people over the age of 10 years in 1995-96, the National Sample Survey. The national prevalence of regular use of smoking tobacco is estimated to be 16.2%, chewing tobacco 14.0%, and alcohol 4.5%. Men were 25.5 times more likely than women to report regular smoking, 3.7 times more likely to regularly chew tobacco, and 9.7 times more likely to regularly use alcohol. Respondents belonging to scheduled castes and tribes (recognized disadvantaged groups) were significantly more likely to report regular use of alcohol as well as smoking and chewing tobacco. People from rural areas had higher rates compared to urban dwellers, as did those with no formal education. Individuals with incomes below the poverty line had higher relative odds of use of chewing tobacco and alcohol compared to those above the poverty line. The regular use of both tobacco and alcohol also increased significantly with each diminishing income quintile. Comparisons are made between these results and those found in the United States and elsewhere, highlighting the need to address control of these substances on the public health agenda.
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-08
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.
Can anti-gravity running improve performance to the same degree as over-ground running?
Brennan, Christopher T; Jenkins, David G; Osborne, Mark A; Oyewale, Michael; Kelly, Vincent G
2018-03-11
This study examined the changes in running performance, maximal blood lactate concentrations and running kinematics between 85%BM anti-gravity (AG) running and normal over-ground (OG) running over an 8-week training period. Fifteen elite male developmental cricketers were assigned to either the AG or over-ground (CON) running group. The AG group (n = 7) ran twice a week on an AG treadmill and once per week over-ground. The CON group (n = 8) completed all sessions OG on grass. Both AG and OG training resulted in similar improvements in time trial and shuttle run performance. Maximal running performance showed moderate differences between the groups; however, the AG condition resulted in less improvement. Large differences in maximal blood lactate concentrations existed, with OG running resulting in greater improvements in blood lactate concentrations measured during maximal running. Moderate increases in stride length paired with moderate decreases in stride rate also resulted from AG training. The use of AG training to supplement regular OG training for performance should be used cautiously, as extended use over long periods of time could lead to altered stride mechanics and reduced blood lactate.
NASA Astrophysics Data System (ADS)
Koshinchanov, Georgy; Dimitrov, Dobri
2008-11-01
The characteristics of rainfall intensity are important for many purposes, including design of sewage and drainage systems, tuning flood warning procedures, etc. These estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with different return periods (e.g. 20, 100 years, etc.). The traditional approach to evaluating these precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations to be used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions have changed and the precipitation intensity estimates need regular updates; 2. Since the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology requires specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: the method of the maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; as above, but with separate modeling of the probability distribution for the middle and high probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; another method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted by S. Gerasimov for Bulgaria; and a method that considers only the intensive rainfalls (if any) during the day with the maximal annual daily precipitation total for a given year. Conclusions are drawn on the relevance and adequacy of the applied methods.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
Maximizing the Science Output of GOES-R SUVI during Operations
NASA Astrophysics Data System (ADS)
Shaw, M.; Vasudevan, G.; Mathur, D. P.; Mansir, D.; Shing, L.; Edwards, C. G.; Seaton, D. B.; Darnel, J.; Nwachuku, C.
2017-12-01
Regular manual calibrations are an often-unavoidable demand on ground operations personnel during long-term missions. This paper describes a set of features built into the instrument control software and the techniques employed by the Solar Ultraviolet Imager (SUVI) team to automate a large fraction of regular on-board calibration activities, allowing SUVI to be operated with little manual commanding from the ground and little interruption to nominal sequencing. SUVI is a Generalized Cassegrain telescope with a large field of view that images the Sun in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. It is part of the payload of the Geostationary Operational Environmental Satellite (GOES-R) mission.
A theoretical framework to predict the most likely ion path in particle imaging.
Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao
2017-03-07
In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
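The design criterion in this abstract is the Fisher information of the entropy coefficient. For a scalar parameter observed through a model output with additive Gaussian measurement noise, the information is the sum of squared output sensitivities divided by the noise variance. The toy sketch below compares two candidate thermal excursions by that criterion; the cell-thermal model, the assumed proportionality of the sensitivity to the temperature excursion, and the noise level are illustrative assumptions, not the authors' model or protocol.

```python
import numpy as np

def fisher_information(sensitivity, noise_std):
    """Fisher information of a scalar parameter theta from measurements
    y_k = f_k(theta) + e_k with e_k ~ N(0, noise_std^2):
        I(theta) = sum_k (df_k / dtheta)^2 / noise_std^2
    `sensitivity` holds the per-sample derivatives df_k/dtheta.
    """
    s = np.asarray(sensitivity, dtype=float)
    return float(np.sum(s ** 2) / noise_std ** 2)

# Hypothetical example: the voltage sensitivity to the entropy coefficient (dU/dT)
# is taken proportional to the imposed temperature excursion, so larger excursions
# are more informative (real designs trade this off against hardware limits).
t = np.linspace(0.0, 2.0, 200)                    # hours
excursion_small = 2.0 * np.sin(np.pi * t)         # ±2 K thermal cycle
excursion_large = 5.0 * np.sin(np.pi * t)         # ±5 K thermal cycle
noise_std = 0.5e-3                                # assumed voltage noise (V)

for name, dT in [("±2 K cycle", excursion_small), ("±5 K cycle", excursion_large)]:
    print(name, fisher_information(dT, noise_std))
```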
NASA Astrophysics Data System (ADS)
Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong
2018-05-01
This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ)^α and the magnetic diffusion by a partial Laplacian. We are able to show that this system with any α > 0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study of the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.
Ibraheem, J J; Paalzow, L; Tfelt-Hansen, P
1983-01-01
Fifteen migraine patients were administered 2 mg ergotamine tartrate in a partial cross-over design as a single oral tablet, rectal suppository or rectal solution. Eight of these patients had been given 0.5 mg ergotamine tartrate intravenously in a previous investigation. Blood samples were taken for up to 54 h after the oral and suppository routes, whereas sampling continued for only 3 h after the rectal solution. The chemical analysis was performed by applying an h.p.l.c. method with a limit of sensitivity of 0.1 ng/ml ergotamine base in plasma. No ergotamine was detected in the blood samples after the oral route, whereas small and very variable quantities were found in blood after the rectal route. Regular calculation of bioavailability could therefore not be performed. An estimate of the maximal possible bioavailability yielded mean values of 2% (tablets), 5% (suppositories) and 6% (rectal solution). The rectal solution elicited faster absorption, and the extent of absorption was significantly higher (P less than 0.05) than for the suppository. PMID:6419759
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought parameters of the medium of the order of n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
The neural substrates of impaired finger tapping regularity after stroke.
Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude
2010-03-01
Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Boverman, Gregory; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J; Kao, Tzu-Jen; Amm, Bruce C; Wang, Xin; Davenport, David M; Chong, David H; Sahni, Rakesh; Ashe, Jeffrey M
2017-04-01
In electrical impedance tomography (EIT), we apply patterns of currents on a set of electrodes at the external boundary of an object, measure the resulting potentials at the electrodes, and, given the aggregate dataset, reconstruct the complex conductivity and permittivity within the object. It is possible to maximize sensitivity to internal conductivity changes by simultaneously applying currents and measuring potentials on all electrodes but this approach also maximizes sensitivity to changes in impedance at the interface. We have, therefore, developed algorithms to assess contact impedance changes at the interface as well as to efficiently and simultaneously reconstruct internal conductivity/permittivity changes within the body. We use simple linear algebraic manipulations, the generalized singular value decomposition, and a dual-mesh finite-element-based framework to reconstruct images in real time. We are also able to efficiently compute the linearized reconstruction for a wide range of regularization parameters and to compute both the generalized cross-validation parameter as well as the L-curve, objective approaches to determining the optimal regularization parameter, in a similarly efficient manner. Results are shown using data from a normal subject and from a clinical intensive care unit patient, both acquired with the GE GENESIS prototype EIT system, demonstrating significantly reduced boundary artifacts due to electrode drift and motion artifact.
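A minimal sketch of the regularization-parameter machinery mentioned above: one SVD of a (hypothetical) linearized sensitivity matrix is reused to sweep many Tikhonov parameters cheaply and to evaluate the GCV function and the L-curve quantities. The matrix J, the noise level and the lambda grid are stand-ins, not the GENESIS reconstruction pipeline.

```python
# Sketch: efficient sweep of the Tikhonov parameter with a single SVD, plus GCV and
# L-curve quantities (hypothetical sensitivity matrix J and data b).
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 100
J = rng.standard_normal((m, n)) @ np.diag(np.exp(-0.1 * np.arange(n)))  # ill-conditioned
x_true = np.zeros(n); x_true[10:20] = 1.0
b = J @ x_true + 0.01 * rng.standard_normal(m)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
beta = U.T @ b
lams = np.logspace(-6, 1, 60)
gcv, res_norm, sol_norm = [], [], []
for lam in lams:
    f = s**2 / (s**2 + lam**2)               # Tikhonov filter factors
    x = Vt.T @ (f * beta / s)
    rn = np.linalg.norm(J @ x - b)
    res_norm.append(rn)
    sol_norm.append(np.linalg.norm(x))
    gcv.append(rn**2 / (m - f.sum())**2)      # GCV function

print("GCV-selected lambda:", lams[int(np.argmin(gcv))])
# The L-curve is the log-log plot of (res_norm, sol_norm); its corner gives another choice.
```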
AN ERP STUDY OF REGULAR AND IRREGULAR ENGLISH PAST TENSE INFLECTION
Newman, Aaron J.; Ullman, Michael T.; Pancheva, Roumyana; Waligura, Diane L.; Neville, Helen J.
2006-01-01
Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs (“regulars”) is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, “irregular” English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over right parietal sites (P600s), and which seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory. PMID:17070703
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed 'MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images, which mitigates the drawbacks of the conventional total variation (TV) regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation and accurate MPHM estimation.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
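The sketch below shows the linear, first-order analogue of the regularized estimation idea: a kernel-regularized FIR (impulse response) estimate with a TC-type prior kernel. The kernel hyperparameters, signals and noise level are illustrative assumptions; the paper's multidimensional Volterra kernels and transient-removal regularization are not reproduced.

```python
# Sketch: kernel-regularized FIR estimation (first-order analogue of regularized
# Volterra estimation). TC kernel and hyperparameters are illustrative choices.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
N, nb = 400, 50                                 # data length, FIR length
u = rng.standard_normal(N)
g_true = 0.8**np.arange(nb) * np.sin(0.5 * np.arange(nb))   # true impulse response
y = np.convolve(u, g_true)[:N] + 0.05 * rng.standard_normal(N)

# Regression matrix Phi such that y ~ Phi @ g
col = np.r_[u[0], np.zeros(nb - 1)]
Phi = toeplitz(u, col)

# TC prior kernel P[i, j] = c * lam**max(i, j), hyperparameters fixed by hand here
c, lam, sigma2 = 1.0, 0.9, 0.05**2
idx = np.arange(nb)
P = c * lam**np.maximum.outer(idx, idx)

# Regularized (MAP) estimate: g = (Phi' Phi + sigma2 * P^-1)^-1 Phi' y
g_hat = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)
print("fit error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```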
NASA Astrophysics Data System (ADS)
Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao
2018-03-01
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
Maximal radius of the aftershock zone in earthquake networks
NASA Astrophysics Data System (ADS)
Mezentsev, A. Yu.; Hayakawa, M.
2009-09-01
In this paper, several seismoactive regions were investigated (Japan, Southern California and two tectonically distinct Japanese subregions) and structural seismic constants were estimated for each region. Using the method for seismic clustering detection proposed by Baiesi and Paczuski [M. Baiesi, M. Paczuski, Phys. Rev. E 69 (2004) 066106; M. Baiesi, M. Paczuski, Nonlin. Proc. Geophys. (2005) 1607-7946], we obtained the equation of the aftershock zone (AZ). It was shown that accounting for the finite velocity of the seismic signal leads naturally to a maximal possible radius of the AZ. We obtained the equation for the maximal radius of the AZ as a function of the magnitude of the main event and estimated its values for each region.
Done, Aaron J; Traustadóttir, Tinna
2016-12-01
Older individuals who exercise regularly exhibit greater resistance to oxidative stress than their sedentary peers, suggesting that exercise can modify age-associated loss of resistance to oxidative stress. However, we recently demonstrated that a single bout of exercise confers protection against a subsequent oxidative challenge in young, but not older adults. We therefore hypothesized that repeated bouts of exercise would be needed to increase resistance to an oxidative challenge in sedentary older middle-aged adults. Sedentary older middle-aged men and women (50-63 years, n = 11) participated in an 8-week exercise intervention. Maximal oxygen consumption was measured before and after the intervention. The exercise intervention consisted of three sessions per week, for 45 min at an intensity corresponding to 70-85% maximal heart rate (HRmax). Resistance to oxidative stress was measured by F2-isoprostane response to a forearm ischemia/reperfusion (I/R) trial. Each participant underwent the I/R trial before and after the exercise intervention. The intervention elicited a significant increase in maximal oxygen consumption (VO2max) (P < 0.0001). Baseline levels of F2-isoprostanes pre- and post-intervention did not differ, but the F2-isoprostane response to the I/R trial was significantly lower following the exercise intervention (time-by-trial interaction, P = 0.043). Individual improvements in aerobic fitness were associated with greater improvements in the F2-isoprostane response (r = -0.761, P = 0.011), further supporting the role of aerobic fitness in resistance to oxidative stress. These data demonstrate that regular exercise with improved fitness leads to increased resistance to oxidative stress in older middle-aged adults and that this measure is modifiable in previously sedentary individuals.
Tancredi, Giancarlo; Lambiase, Caterina; Favoriti, Alessandra; Ricupito, Francesca; Paoli, Sara; Duse, Marzia; De Castro, Giovanna; Zicari, Anna Maria; Vitaliti, Giovanna; Falsaperla, Raffaele; Lubrano, Riccardo
2016-04-27
An increasing number of children with chronic disease require a complete medical examination to be able to practice physical activity. Particularly children with solitary functioning kidney (SFK) need an accurate functional evaluation to perform sports activities safely. The aim of our study was to evaluate the influence of regular physical activity on the cardiorespiratory function of children with solitary functioning kidney. Twenty-nine patients with congenital SFK, mean age 13.9 ± 5.0 years, and 36 controls (C), mean age 13.8 ± 3.7 years, underwent a cardiorespiratory assessment with spirometry and maximal cardiopulmonary exercise testing. All subjects were divided into two groups: sedentary (S) and trained (T) patients, by means of a standardized questionnaire about their weekly physical activity. We found that mean values of maximal oxygen consumption (VO2max) and exercise time (ET) were higher in T subjects than in S subjects. Particularly SFK-T presented mean values of VO2max similar to C-T and significantly higher than C-S (SFK-T: 44.7 ± 6.3 vs C-S: 37.8 ± 3.7 ml/min/kg; p < 0.0008). We also found significantly higher mean values of ET in SFK-T than in C-S subjects (SFK-T: 12.9 ± 1.6 vs C-S: 10.8 ± 2.5 min; p < 0.02). Our study showed that a regular moderate/high level of physical activity improves aerobic capacity (VO2max) and exercise tolerance in congenital SFK patients without increasing the risks for cardiovascular accidents; accordingly, sports activities should be strongly encouraged in SFK patients to maximize health benefits.
Balasubramanian, Hari; Biehl, Sebastian; Dai, Longjie; Muriel, Ana
2014-03-01
Appointments in primary care are of two types: 1) prescheduled appointments, which are booked in advance of a given workday; and 2) same-day appointments, which are booked as calls come during the workday. The challenge for practices is to provide preferred time slots for prescheduled appointments and yet see as many same-day patients as possible during regular work hours. It is also important, to the extent possible, to match same-day patients with their own providers (so as to maximize continuity of care). In this paper, we present a mathematical framework (a stochastic dynamic program) for same-day patient allocation in multi-physician practices in which calls for same-day appointments come in dynamically over a workday. Allocation decisions have to be made in the presence of prescheduled appointments and without complete demand information. The objective is to maximize a weighted measure that includes the number of same-day patients seen during regular work hours as well as the continuity provided to these patients. Our experimental design is motivated by empirical data we collected at a 3-provider family medicine practice in Massachusetts. Our results show that the location of prescheduled appointments - i.e. where in the day these appointments are booked - has a significant impact on the number of same-day patients a practice can see during regular work hours, as well as the continuity the practice is able to provide. We find that a 2-Blocks policy which books prescheduled appointments in two clusters - early morning and early afternoon - works very well. We also provide a simple, easily implementable policy for schedulers to assign incoming same-day requests to appointment slots. Our results show that this policy provides near-optimal same-day assignments in a variety of settings.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
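A minimal sketch of the soft-thresholding core of L1-regularized inversion, written as plain ISTA for a generic problem min_x 0.5*||A x - b||^2 + lam*||x||_1. The variable-splitting solver with closed-form FFT updates and magnitude weighting described above is not reproduced; A, b and lam are synthetic stand-ins.

```python
# Sketch: generic ISTA with soft thresholding for an l1-regularized linear inverse problem.
# Only the thresholding core is illustrated, not the paper's FFT-based variable splitting.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((128, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(128)
x_hat = ista(A, b, lam=0.05)
print("recovered nonzeros:", int(np.sum(np.abs(x_hat) > 1e-3)))
```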
Thermal comfort of dual-chamber ski gloves
NASA Astrophysics Data System (ADS)
Dotti, F.; Colonna, M.; Ferri, A.
2017-10-01
In this work, the special design of a pair of ski gloves has been assessed in terms of thermal comfort. The glove 2in1 Gore-Tex has a dual-chamber construction, with two possible wearing configurations: one called “grip” to maximize finger flexibility and one called “warm” to maximize thermal insulation in extremely cold conditions. The dual-chamber glove has been compared with two regular ski gloves produced by the same company. An intermittent test on a treadmill was carried out in a climatic chamber: it consisted of four intense activity phases, during which the volunteer ran at 9 km/h on a 5% slope for 4 minutes, separated by 5-min resting phases. Finger temperature measurements were compared with the thermal sensations expressed by two volunteers during the test.
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik
Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three wave resonances which yields nonlinear "2½-dimensional" limit resonant equations for f → ∞. The existence and global regularity of solutions of limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f and the estimates are uniform in alpha.
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for a sparsest solution (solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One of such examples is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both problems of identification of the source location and of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to solution of one nonconvex problem. On simple examples, we explain these techniques and compare them from the point of view of implementation simplicity, approximation capability and convergence properties. Finally, these methods will be applied to the European Tracer Experiment (ETEX) data and the results will be compared with the current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
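As a sketch of the sparse, nonnegative formulation discussed above: with x >= 0 the l1 penalty reduces to a linear term, so a bound-constrained quasi-Newton solver can be used directly. The source-receptor matrix A, the observations and lam below are synthetic stand-ins, not ETEX data or the authors' solvers.

```python
# Sketch: sparse, nonnegative source-term estimation,
#   min 0.5*||A x - y||^2 + lam*sum(x)  subject to  x >= 0
# (with x >= 0, the l1 norm equals sum(x), so the penalty is linear).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_obs, n_times = 80, 200
A = np.abs(rng.standard_normal((n_obs, n_times)))   # stand-in source-receptor matrix
x_true = np.zeros(n_times); x_true[90:95] = 5.0     # short release window
y = A @ x_true + 0.1 * rng.standard_normal(n_obs)
lam = 1.0

def fun_and_grad(x):
    r = A @ x - y
    return 0.5 * r @ r + lam * x.sum(), A.T @ r + lam

res = minimize(fun_and_grad, np.zeros(n_times), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * n_times)
x_hat = res.x
print("estimated total release:", x_hat.sum(), "true:", x_true.sum())
```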
NASA Astrophysics Data System (ADS)
Christensen, N. K.; Christensen, S.; Ferre, T. P. A.
2015-09-01
Although geophysics is being used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. Therefore, this paper presents the newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped into a platform for generating and evaluating multi-modal data for objective hydrologic analysis. It is intentionally flexible to allow for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits with typical hydraulic conductivities and electrical resistivities covering impermeable bedrock with low resistivity. It is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, only using hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI) the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases, the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely. SHI or JHI with regularization based on the use of TEM data produced estimated hydraulic conductivity fields that bear more resemblance to the reference fields than when using HI with Tikhonov regularization. However, for the studied system the resistivities estimated by SHI or JHI must be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. Much of the lack of value of the geophysical data arises from a mistaken faith in the power of the petrophysical model in combination with geophysical data of low sensitivity, thereby propagating geophysical estimation errors into the hydrologic model parameters. With respect to reducing model prediction error, whether it is valuable to include geophysical data in the model calibration depends on the type of prediction. It is found that all calibrated models are good predictors of hydraulic head. When the stress situation is changed from that of the hydrologic calibration data, then all models make biased predictions of head change. All calibrated models turn out to be very poor predictors of the pumping well's recharge area and groundwater age. The reason for this is that distributed recharge is parameterized as depending on estimated hydraulic conductivity of the upper model layer which tends to be underestimated. Another important insight from the HYTEB analysis is thus that either recharge should be parameterized and estimated in a different way, or other types of data should be added to better constrain the recharge estimates.
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
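A simplified sketch of the EM idea for cure rate models, using a plain two-component mixture cure model with exponential latency and right censoring rather than the Conway-Maxwell-Poisson/Weibull model developed in the paper; all settings are illustrative assumptions.

```python
# Sketch: EM for a simple mixture cure model (cure fraction + exponential latency),
# as an illustration of the E-step/M-step structure only.
import numpy as np

def em_mixture_cure(t, d, n_iter=200):
    """t: observed times, d: event indicator (1 = event, 0 = right censored)."""
    pi_cure, lam = 0.3, 1.0 / t.mean()          # initial guesses
    for _ in range(n_iter):
        # E-step: probability that each censored subject is susceptible (uncured)
        surv = np.exp(-lam * t)
        w = np.where(d == 1, 1.0,
                     (1 - pi_cure) * surv / (pi_cure + (1 - pi_cure) * surv))
        # M-step: update cure fraction and exponential rate
        pi_cure = 1.0 - w.mean()
        lam = d.sum() / np.sum(w * t)
    return pi_cure, lam

rng = np.random.default_rng(5)
n = 2000
cured = rng.random(n) < 0.4
t_event = rng.exponential(scale=2.0, size=n)        # latency for susceptibles
t_cens = rng.uniform(0, 8, size=n)                  # administrative censoring
time = np.where(cured, t_cens, np.minimum(t_event, t_cens))
event = ((~cured) & (t_event <= t_cens)).astype(int)
print(em_mixture_cure(time, event))                 # roughly (0.4, 0.5) for this setup
```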
Influence of mobile phone traffic on base station exposure of the general public.
Joseph, Wout; Verloock, Leen
2010-11-01
The influence of mobile phone traffic on temporal radiofrequency exposure due to base stations during 7 d is compared for five different sites with Erlang data (representing average mobile phone traffic intensity during a period of time). The time periods of high exposure and high traffic during a day are compared and good agreement is obtained. The minimal required measurement periods to obtain accurate estimates for maximal and average long-period exposure (7 d) are determined. It is shown that these periods may be very long, indicating the necessity of new methodologies to estimate maximal and average exposure from short-period measurement data. Therefore, a new method to calculate the fields at a time instant from fields at another time instant using normalized Erlang values is proposed. This enables the estimation of maximal and average exposure during a week from short-period measurements using only Erlang data and avoids the necessity of long measurement times.
Unfolding sphere size distributions with a density estimator based on Tikhonov regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weese, J.; Korat, E.; Maier, D.
1997-12-01
This report proposes a method for unfolding sphere size distributions given a sample of radii that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the proposed two neural fusion algorithms can converge globally to the robust and optimal image estimate. Simulation results confirm that in different noise environments, the proposed two neural fusion algorithms can obtain a better image estimate than several well known image restoration and image fusion methods.
The Fermi LAT Very Important Project (VIP) List of Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Thompson, David J.; Fermi Large Area Telescope Collaboration
2018-01-01
Using nine years of Fermi Gamma-ray Space Telescope Large Area Telescope (LAT) observations, we have identified 30 projects for Active Galactic Nuclei (AGN) that appear to provide strong prospects for significant scientific advances. This Very Important Project (VIP) AGN list includes AGNs that have good multiwavelength coverage, are regularly detected by the Fermi LAT, and offer scientifically interesting timing or spectral properties. Each project has one or more LAT scientists identified who are actively monitoring the source. They will be regularly updating the LAT results for these VIP AGNs, working together with multiwavelength observers and theorists to maximize the scientific return during the coming years of the Fermi mission. See https://confluence.slac.stanford.edu/display/GLAMCOG/VIP+List+of+AGNs+for+Continued+Study
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
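A generic sketch of truncated total least squares (TTLS) via the SVD of the augmented matrix [A b], assuming the standard minimum-norm TTLS formula; the cortical transfer matrix and the L-curve choice of truncation level from the paper are not reproduced.

```python
# Sketch: truncated total least squares for A x ~ b when both A and b carry errors.
import numpy as np

def ttls(A, b, k):
    """Minimum-norm truncated TLS solution with truncation level k (1 <= k <= n)."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T                       # (n+1) x (n+1) right singular vectors of [A b]
    V12 = V[:n, k:]                # top-right block
    V22 = V[n, k:]                 # bottom-right block (a row)
    return -V12 @ V22 / np.dot(V22, V22)

rng = np.random.default_rng(6)
A0 = rng.standard_normal((100, 20)) @ np.diag(np.exp(-0.3 * np.arange(20)))
x_true = rng.standard_normal(20)
A = A0 + 0.01 * rng.standard_normal(A0.shape)        # noisy operator
b = A0 @ x_true + 0.01 * rng.standard_normal(100)    # noisy data
for k in (5, 10, 15):
    x_k = ttls(A, b, k)
    print(k, np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
```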
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
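A minimal sketch of a Tikhonov-regularized Gauss-Newton iteration on a toy nonlinear least-squares problem; the bioheat forward model, pseudospectral discretization and regularization-parameter choice of the paper are replaced by hypothetical stand-ins.

```python
# Sketch: regularized (Levenberg-Marquardt-style) Gauss-Newton for nonlinear least squares.
import numpy as np

def gauss_newton_tikhonov(residual, jacobian, p0, lam=1e-2, n_iter=30):
    p = p0.copy()
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        # Tikhonov-regularized normal equations for the update step
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p = p + dp
    return p

# Toy problem: recover (a, b) in y = a * exp(-b * x) from noisy samples
rng = np.random.default_rng(7)
x = np.linspace(0, 4, 50)
p_true = np.array([2.0, 1.3])
y = p_true[0] * np.exp(-p_true[1] * x) + 0.01 * rng.standard_normal(x.size)

residual = lambda p: p[0] * np.exp(-p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(-p[1] * x),
                                      -p[0] * x * np.exp(-p[1] * x)])
print(gauss_newton_tikhonov(residual, jacobian, np.array([1.0, 1.0])))
```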
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Petersen, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
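A rough sketch of the metalearning step: cross-validated base-learner predictions are combined with convex weights chosen to maximize AUC with a derivative-free optimizer. The simulated predictions, the simplex parameterization and the use of Nelder-Mead are illustrative assumptions, not the Super Learner implementation.

```python
# Sketch: choose ensemble weights that maximize AUC over simulated base-learner predictions.
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n, k = 1000, 4
y = (rng.random(n) < 0.1).astype(int)                 # rare outcome
signal = y + 0.8 * rng.standard_normal(n)
# Columns of Z play the role of cross-validated predictions from k base learners
Z = np.column_stack([signal + s * rng.standard_normal(n) for s in (0.5, 1.0, 2.0, 4.0)])

def neg_auc(v):
    w = np.abs(v)
    w = w / w.sum()                                    # map parameters to the simplex
    return -roc_auc_score(y, Z @ w)

res = minimize(neg_auc, np.full(k, 1.0 / k), method="Nelder-Mead")
w = np.abs(res.x); w = w / w.sum()
print("ensemble weights:", np.round(w, 3), "ensemble AUC:", -neg_auc(res.x))
```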
Length and elasticity of side reins affect rein tension at trot.
Clayton, Hilary M; Larson, Britt; Kaiser, LeeAnn J; Lavagnino, Michael
2011-06-01
This study investigated the horse's contribution to tension in the reins. The experimental hypotheses were that tension in side reins (1) increases biphasically in each trot stride, (2) changes inversely with rein length, and (3) changes with elasticity of the reins. Eight riding horses trotted in hand at consistent speed in a straight line wearing a bit and bridle and three types of side reins (inelastic, stiff elastic, compliant elastic) were evaluated in random order at long, neutral, and short lengths. Strain gauge transducers (240 Hz) measured minimal, maximal and mean rein tension, rate of loading and impulse. The effects of rein type and length were evaluated using ANOVA with Bonferroni post hoc tests. Rein tension oscillated in a regular pattern with a peak during each diagonal stance phase. Within each rein type, minimal, maximal and mean tensions were higher with shorter reins. At neutral or short lengths, minimal tension increased and maximal tension decreased with elasticity of the reins. Short, inelastic reins had the highest maximal tension and rate of loading. Since the tension variables respond differently to rein elasticity at different lengths, it is recommended that a set of variables representing different aspects of rein tension should be reported. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cho, J; Overton, T R; Schwab, C G; Tauer, L W
2007-10-01
The profitability of feeding rumen-protected Met (RPMet) sources to produce milk protein was estimated using a 2-step procedure: First, the effect of Met in metabolizable protein (MP) on milk protein production was estimated by using a quadratic Box-Cox functional form. Then, using these estimation results, the amounts of RPMet supplement that corresponded to the optimal levels of Met in MP for maximizing milk protein production and profit on dairy farms were determined. The data used in this study were modified from data used to determine the optimal level of Met in MP for lactating cows in the Nutrient Requirements of Dairy Cattle (NRC, 2001). The data used in this study differ from that in the NRC (2001) data in 2 ways. First, because dairy feed generally contains 1.80 to 1.90% Met in MP, this study adjusts the reference production value (RPV) from 2.06 to 1.80 or 1.90%. Consequently, the milk protein production response is also modified to an RPV of 1.80 or 1.90% Met in MP. Second, because this study is especially interested in how much additional Met, beyond the 1.80 or 1.90% already contained in the basal diet, is required to maximize farm profits, the data used are limited to concentrations of Met in MP above 1.80 or 1.90%. This allowed us to calculate any additional cost to farmers based solely on the price of an RPMet supplement and eliminated the need to estimate the dollar value of each gram of Met already contained in the basal diet. Results indicated that the optimal level of Met in MP for maximizing milk protein production was 2.40 and 2.42%, where the RPV was 1.80 and 1.90%, respectively. These optimal levels were almost identical to the recommended level of Met in MP of 2.40% in the NRC (2001). The amounts of RPMet required to increase the percentage of Met in MP from each RPV to 2.40 and 2.42% were 21.6 and 18.5 g/d, respectively. On the other hand, the optimal levels of Met in MP for maximizing profit were 2.32 and 2.34%, respectively. The amounts of RPMet required to increase the percentage of Met in MP from each RPV to 2.32 and 2.34% were 18.7 and 15.6 g/d, respectively. In each case, the additional daily profit per cow was estimated to be $0.38 and $0.29. These additional profit estimates were $0.02 higher than the additional profit estimates for maximizing milk protein production.
1981-03-01
percentage of fat, maximal aerobic power, serum concentrations of triglycerides, total cholesterol and HDL cholesterol. This information is obtained from each...drinking and physical activity related to health parameters such as weight, body fat content, maximal aerobic power and serum concentrations of...subjects, and the body fat is estimated from the body density (4). 3. The maximal aerobic power is assessed indirectly according to the method of Astrand
Wyoming Low-Volume Roads Traffic Volume Estimation
DOT National Transportation Integrated Search
2015-10-01
Low-volume roads are excluded from regular traffic counts except on a need to know basis. But needs for traffic volume data on low-volume roads in road infrastructure management, safety, and air quality analysis have necessitated regular traffic volu...
Meylan, César M P; Cronin, John B; Oliver, Jon L; Hughes, Michael M G; Jidovtseff, Boris; Pinder, Shane
2015-03-01
The purpose of this study was to quantify the inter-session reliability of force-velocity-power profiling and estimated maximal strength in youth. Thirty-six males (11-15 years old) performed a ballistic supine leg press test at five randomized loads (80%, 100%, 120%, 140%, and 160% body mass) on three separate occasions. Peak and mean force, power, velocity, and peak displacement were collected with a linear position transducer attached to the weight stack. Mean values at each load were used to calculate different regression lines and estimate maximal strength, force, velocity, and power. All variables were found reliable (change in the mean [CIM] = -1 to 14%; coefficient of variation [CV] = 3-18%; intraclass correlation coefficient [ICC] = 0.74-0.99), but were likely to benefit from a familiarization, apart from the unreliable maximal force/velocity ratio (CIM = 0-3%; CV = 23-25%; ICC = 0.35-0.54) and load at maximal power (CIM = -1 to 2%; CV = 10-13%; ICC = 0.26-0.61). Isoinertial force-velocity-power profiling and maximal strength in youth can be assessed after a familiarization session. Such profiling may provide valuable insight into neuromuscular capabilities during growth and maturation and may be used to monitor specific training adaptations.
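As an illustration of how such profiling is typically computed, the sketch below fits a linear force-velocity relationship to per-load mean values and derives F0, V0 and maximal power; the numbers are invented for illustration and the exact regression models used in the study may differ.

```python
# Sketch: linear force-velocity profile and derived F0, V0, Pmax (hypothetical data).
import numpy as np

# Hypothetical mean values over the five loads (80-160% body mass)
velocity = np.array([1.60, 1.35, 1.15, 0.95, 0.80])   # m/s
force = np.array([900, 1050, 1180, 1320, 1430])        # N

slope, intercept = np.polyfit(velocity, force, 1)       # F(v) = intercept + slope * v
F0 = intercept                                          # theoretical force at v = 0
V0 = -intercept / slope                                 # theoretical velocity at F = 0
P_max = F0 * V0 / 4.0                                   # peak of the parabolic P(v) = F(v)*v
print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, Pmax = {P_max:.0f} W")
```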
Underreporting of high-risk water and sanitation practices undermines progress on global targets.
Vedachalam, Sridhar; MacDonald, Luke H; Shiferaw, Solomon; Seme, Assefa; Schwab, Kellogg J
2017-01-01
Water and sanitation indicators under the Millennium Development Goals failed to capture high-risk practices undertaken on a regular basis. In conjunction with local partners, fourteen rounds of household surveys using mobile phones with a customized open-source application were conducted across nine study geographies in Asia and Africa. In addition to the main water and sanitation facilities, interviewees (n = 245,054) identified all water and sanitation options regularly used for at least one season of the year. Unimproved water consumption and open defecation were targeted as high-risk practices. We defined underreporting as the difference between the regular and main use of high-risk practices. Our estimates of high-risk practices as the main option matched the widely accepted Demographic and Health Surveys (DHS) estimates within the 95% confidence interval. However, estimates of these practices as a regular option was far higher than the DHS estimates. Across the nine geographies, median underreporting of unimproved water use was 5.5%, with a range of 0.5% to 13.9%. Median underreporting of open defecation was much higher at 9.9%, with a range of 2.7% to 11.5%. This resulted in an underreported population of 25 million regularly consuming unimproved water and 50 million regularly practicing open defecation. Further examination of data from Ethiopia suggested that location and socio-economic factors were significant drivers of underreporting. Current global monitoring relies on a framework that considers the availability and use of a single option to meet drinking water and sanitation needs. Our analysis demonstrates the use of multiple options and widespread underreporting of high-risk practices. Policies based on current monitoring data, therefore, fail to consider the range of challenges and solutions to meeting water and sanitation needs, and result in an inflated sense of progress. Mobile surveys offer a cost-effective and innovative platform to rapidly and repeatedly monitor critical development metrics.
Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A
2018-04-01
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provides plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rule out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
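A minimal sketch of the guided-filter regularization step, assuming the standard grayscale guided filter applied to one displacement component with the fixed image as guide; the full Demons-style registration loop and the clinical datasets are not reproduced.

```python
# Sketch: guided-filter smoothing of a displacement field, as a structure-preserving
# alternative to Gaussian regularization (minimal grayscale guided filter, He et al.).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I**2
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)              # output is locally linear in the guide

# Toy example: smooth a noisy displacement component while respecting an intensity edge
rng = np.random.default_rng(9)
img = np.zeros((128, 128)); img[:, 64:] = 1.0            # guide with a sliding interface
disp_x = np.where(img > 0.5, 3.0, -3.0) + 0.5 * rng.standard_normal(img.shape)
disp_x_smooth = guided_filter(img, disp_x)
print(disp_x_smooth[:, 30].mean(), disp_x_smooth[:, 100].mean())   # edge preserved
```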
Krol, Marieke; Brouwer, Werner B F; Severens, Johan L; Kaper, Janneke; Evers, Silvia M A A
2012-12-01
Productivity costs related to paid work are commonly calculated in economic evaluations of health technologies by multiplying the relevant number of work days lost with a wage rate estimate. It has been argued that actual productivity costs may either be lower or higher than current estimates due to compensation mechanisms and/or multiplier effects (related to team dependency and problems with finding good substitutes in cases of absenteeism). Empirical evidence on such mechanisms and their impact on productivity costs is scarce, however. This study aims to increase knowledge on how diminished productivity is compensated within firms. Moreover, it aims to explore how compensation and multiplier effects potentially affect productivity cost estimates. Absenteeism and compensation mechanisms were measured in a randomized trial among Dutch citizens examining the cost-effectiveness of reimbursement for smoking cessation treatment. Multiplier effects were extracted from published literature. Productivity costs were calculated applying the Friction Cost Approach. Regular estimates were subsequently adjusted for (i) compensation during regular working hours, (ii) job-dependent multipliers and (iii) both compensation and multiplier effects. A total of 187 trial respondents were eligible for inclusion in this study, based on being in paid employment, having experienced absenteeism in the preceding six months and completing the questionnaire on absenteeism and compensation mechanisms. Over half of these respondents stated that their absenteeism was compensated during normal working hours by themselves or colleagues. Only counting productivity costs not compensated in regular working hours reduced the traditional estimate by 57%. Correcting for multiplier effects increased regular estimates by a quarter. Combining both impacts decreased traditional estimates by 29%. To conclude, large amounts of lost production are compensated in normal hours. Productivity cost estimates are strongly influenced by adjustment for compensation mechanisms and multiplier effects. The validity of such adjustments needs further examination, however. Copyright © 2012 Elsevier Ltd. All rights reserved.
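A small illustrative calculation of how compensation and multiplier adjustments modify a friction-cost productivity estimate; the wage, days lost and adjustment factors below are hypothetical and do not reproduce the study's figures.

```python
# Sketch: friction-cost style productivity cost with optional adjustments (toy numbers).
def productivity_cost(days_lost, daily_wage, share_compensated=0.0, multiplier=1.0):
    """Lost days x wage, reduced by the share of work compensated in regular hours
    and scaled by a job-dependent multiplier."""
    return days_lost * daily_wage * (1.0 - share_compensated) * multiplier

base = productivity_cost(10, 200)                                        # traditional estimate
adjusted = productivity_cost(10, 200, share_compensated=0.5, multiplier=1.25)
print(base, adjusted)   # in this toy case the combined adjustment lowers the estimate
```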
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
NASA Technical Reports Server (NTRS)
2008-01-01
Evaluation of Maximal Oxygen Uptake and Submaximal Estimates of VO2max Before, During, and After Long Duration International Space Station Missions (VO2max) will document changes in maximum oxygen uptake for crewmembers onboard the International Space Station (ISS) on long-duration missions, greater than 90 days. This investigation will establish the characteristics of VO2max during flight and assess the validity of the current methods of tracking aerobic capacity change during and following the ISS missions.
Modulational estimate for the maximal Lyapunov exponent in Fermi-Pasta-Ulam chains
NASA Astrophysics Data System (ADS)
Dauxois, Thierry; Ruffo, Stefano; Torcini, Alessandro
1997-12-01
In the framework of the Fermi-Pasta-Ulam (FPU) model, we show a simple method to give an accurate analytical estimation of the maximal Lyapunov exponent at high energy density. The method is based on the computation of the mean value of the modulational instability growth rates associated to unstable modes. Moreover, we show that the strong stochasticity threshold found in the β-FPU system is closely related to a transition in tangent space, the Lyapunov eigenvector being more localized in space at high energy.
Waldmann, Elisa; Vogt, Anja; Crispin, Alexander; Altenhofer, Julia; Riks, Ina; Parhofer, Klaus G
2017-04-01
In this study, we evaluated the effect of mipomersen in patients with severe LDL-hypercholesterolaemia and atherosclerosis, treated by lipid lowering drugs and regular lipoprotein apheresis. This prospective, randomized, controlled phase II single center trial enrolled 15 patients (9 males, 6 females; 59 ± 9 y, BMI 27 ± 4 kg/m²) with established atherosclerosis, LDL-cholesterol ≥130 mg/dL (3.4 mmol/L) despite maximal possible drug therapy, and fulfilling German criteria for regular lipoprotein apheresis. All patients were on stable lipid lowering drug therapy and regular apheresis for >3 months. Patients randomized to treatment (n = 11) self-injected mipomersen 200 mg sc weekly, at day 4 after apheresis, for 26 weeks. Patients randomized to control (n = 4) continued apheresis without injection. The primary endpoint was the change in pre-apheresis LDL-cholesterol. Of the patients randomized to mipomersen, 3 discontinued the drug early (<12 weeks therapy) for side effects. For these, another 3 were recruited and randomized. Further, 4 patients discontinued mipomersen between 12 and 26 weeks for side effects (moderate to severe injection site reactions n = 3 and elevated liver enzymes n = 1). In those treated for >12 weeks, mipomersen reduced pre-apheresis LDL-cholesterol significantly by 22.6 ± 17.0%, from a baseline of 4.8 ± 1.2 mmol/L to 3.7 ± 0.9 mmol/L, while there was no significant change in the control group (+1.6 ± 9.3%), with the difference between the groups being significant (p=0.02). Mipomersen also decreased pre-apheresis lipoprotein(a) (Lp(a)) concentration from a median baseline of 40.2 mg/dL (32.5,71) by 16% (-19.4,13.6), though without significance (p=0.21). Mipomersen reduces LDL-cholesterol (significantly) and Lp(a) (non-significantly) in patients on maximal lipid-lowering drug therapy and regular apheresis, but is often associated with side effects. Copyright © 2017 Elsevier B.V. All rights reserved.
Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations
NASA Astrophysics Data System (ADS)
Phan, Tuoc
2017-12-01
This paper studies the Sobolev regularity for weak solutions of a class of singular quasi-linear parabolic problems of the form u_t - div[A(x,t,u,∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case where the vector coefficients A are discontinuous and singular in the (x,t)-variables, and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x,t)-variables.
A Revision on Classical Solutions to the Cauchy Boltzmann Problem for Soft Potentials
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Gamba, Irene M.
2011-05-01
This short note complements the recent paper of the authors (Alonso, Gamba in J. Stat. Phys. 137(5-6):1147-1165, 2009). We revisit the results on propagation of regularity and stability using L^p estimates for the gain and loss collision operators, for which the exponent range had been misstated for the loss operator. We show here the correct range of exponents. We require a Lebesgue exponent α > 1 in the angular part of the collision kernel in order to obtain finiteness of some constants involved in the regularity and stability estimates. As a consequence, the L^p regularity associated with the Cauchy problem of the space-inhomogeneous Boltzmann equation holds for an explicitly determined finite range of p ≥ 1.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
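As an illustration of the optimization criterion only (not the decoder or quantizer design used in the paper), the sketch below grid-searches symmetric 3-bit quantizer thresholds for BPSK channel LLRs so as to maximize the empirical mutual information between the transmitted bit and the quantized message; the noise variance and threshold grid are arbitrary assumptions:

import numpy as np
from itertools import combinations

def mutual_information(bits, q):
    """Empirical mutual information (bits) between the source bit and the quantizer output."""
    mi = 0.0
    for b in (0, 1):
        pb = np.mean(bits == b)
        for level in np.unique(q):
            pq = np.mean(q == level)
            pj = np.mean((bits == b) & (q == level))
            if pj > 0:
                mi += pj * np.log2(pj / (pb * pq))
    return mi

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
sigma2 = 0.8                                       # illustrative AWGN noise variance
y = (1 - 2 * bits) + np.sqrt(sigma2) * rng.standard_normal(bits.size)
llr = 2 * y / sigma2                               # channel log-likelihood ratios

# 3-bit symmetric quantizer: pick three positive thresholds (mirrored about zero)
# that maximize the empirical mutual information.
grid = np.arange(0.5, 8.0, 0.5)
best_t, best_mi = None, -1.0
for t in combinations(grid, 3):
    edges = np.concatenate([-np.array(t[::-1]), [0.0], np.array(t)])
    q = np.digitize(llr, edges)                    # 8 quantizer output levels
    mi = mutual_information(bits, q)
    if mi > best_mi:
        best_t, best_mi = t, mi
print("best thresholds:", best_t, "mutual information:", round(best_mi, 4))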
Decay, excitation, and ionization of lithium Rydberg states by blackbody radiation
NASA Astrophysics Data System (ADS)
Ovsiannikov, V. D.; Glukhov, I. L.
2010-09-01
Details of the interaction between blackbody radiation and neutral lithium atoms were studied in the temperature range T = 100-2000 K. The rates of thermally induced decays, excitations and ionization were calculated for the S-, P- and D-series of Rydberg states in the Fues model potential approach. Quantitative regularities were determined for the states with the maximal rates of blackbody-radiation-induced processes. Approximation formulas were proposed for analytical representation of the depopulation rates.
New tip design and shock wave pattern of electrohydraulic probes for endoureteral lithotripsy.
Vorreuther, R
1993-02-01
A new tip design of a 3.3F electrohydraulic probe for endoureteral lithotripsy was evaluated in comparison to a regular probe. The peak pressure, as well as the slope of the shock front, depends solely on the voltage. Increasing the capacitance merely leads to broader pulses. A laser-like short high-pressure pulse has a greater impact on stone disintegration than a corresponding broader low-pressure pulse of the same energy. Using the regular probe, only positive pressures were obtained. Pressure distribution around the regular tip was approximately spherical, whereas the modified probe tip "beamed" the shock wave to a great extent. In addition, a negative-pressure half-cycle was added to the initial positive peak pressure, which resulted in a higher maximal pressure amplitude. The directed shock wave had a greater depth of penetration into a model stone. Thus, the new probe should be better able to destroy harder stones in particular. The trauma to the ureter was reduced when the probe touched the wall tangentially. No difference in the effect of the two probes was seen when placing the probe directly on the mucosa.
Managing a closed-loop supply chain inventory system with learning effects
NASA Astrophysics Data System (ADS)
Jauhari, Wakhid Ahmad; Dwicahyani, Anindya Rachma; Hendaryani, Oktiviandri; Kurdhi, Nughthoh Arfawi
2018-02-01
In this paper, we propose a closed-loop supply chain model consisting of a retailer and a manufacturer. We intend to investigate the impact of learning in regular production, remanufacturing and reworking. The customer demand is assumed deterministic and is satisfied from both the regular production and the remanufacturing process. The return rate of used items depends on quality. We propose a mathematical model whose objective is to maximize the joint total profit by simultaneously determining the length of the retailer's ordering cycle and the numbers of regular production and remanufacturing cycles. An algorithm is suggested for finding the optimal solution. A numerical example is presented to illustrate the application of the proposed model. The results show that the integrated model performs better in reducing total cost compared to the independent model. The total cost is most affected by changes in the values of unit production cost and acceptable quality level. In addition, changes in the proportion of defective items and the fraction of holding costs significantly influence the retailer's ordering period.
Gabbett, T
2005-01-01
Objectives: To compare the physiological and anthropometric characteristics of specific playing positions and positional playing groups in junior rugby league players. Methods: Two hundred and forty junior rugby league players underwent measurements of standard anthropometry (body mass, height, sum of four skinfolds), muscular power (vertical jump), speed (10, 20, and 40 m sprint), agility (L run), and estimated maximal aerobic power (multi-stage fitness test) during the competitive phase of the season, after players had obtained a degree of match fitness. Results: Props were significantly (p<0.05) taller, heavier, and had greater skinfold thickness than all other positions. The halfback and centre positions were faster than props over 40 m. Halfbacks had significantly (p<0.05) greater estimated maximal aerobic power than props. When data were analysed according to positional similarities, it was found that the props positional group had lower 20 and 40 m speed, agility, and estimated maximal aerobic power than the hookers and halves and outside backs positional groups. Differences in the physiological and anthropometric characteristics of other individual playing positions and positional playing groups were uncommon. Conclusions: The results of this study demonstrate that few physiological and anthropometric differences exist among individual playing positions in junior rugby league players, although props are taller, heavier, have greater skinfold thickness, lower 20 and 40 m speed, agility, and estimated maximal aerobic power than other positional playing groups. These findings provide normative data and realistic performance standards for junior rugby league players competing in specific individual positions and positional playing groups. PMID:16118309
Carlsen, K H; Oseid, S; Sandnes, T; Trondskog, B; Røksund, O
1991-03-20
Geilomo hospital for children with asthma and allergy is situated 800 m above sea level in a non-polluted area in the central part of Norway. 31 children who were admitted to this hospital from different parts of Norway (mostly from the main cities) were studied for six weeks. They underwent physical training and daily measurements were taken of lung function and the effect of bronchodilators. The bronchial responsiveness of the children improved significantly from week 1 to week 6, as measured by reduction in lung function after sub-maximal running on a treadmill. There was significant improvement in daily symptom score, and in degree of obstruction as shown by physical examination. The children's improvement was probably the result of a stay in a mountainous area with very little air pollution or allergens, combined with regular planned physical activity, and regular medication and surveillance.
Early, regular breast-milk pumping may lead to early breast-milk feeding cessation.
Yourkavitch, Jennifer; Rasmussen, Kathleen M; Pence, Brian W; Aiello, Allison; Ennett, Susan; Bengtson, Angela M; Chetwynd, Ellen; Robinson, Whitney
2018-06-01
To estimate the effect of early, regular breast-milk pumping on time to breast-milk feeding (BMF) and exclusive BMF cessation, for working and non-working women. Using the Infant Feeding Practices Survey II (IFPS II), we estimated weighted hazard ratios (HR) for the effect of regular pumping (participant defined) compared with non-regular/not pumping, reported at month 2, on both time to BMF cessation (to 12 months) and time to exclusive BMF cessation (to 6 months), using inverse probability weights to control confounding. USA, 2005-2007. BMF (n 1624) and exclusively BMF (n 971) IFPS II participants at month 2. The weighted HR for time to BMF cessation was 1·62 (95 % CI 1·47, 1·78) and for time to exclusive BMF cessation was 1·14 (95 % CI 1·03, 1·25). Among non-working women, the weighted HR for time to BMF cessation was 2·05 (95 % CI 1·84, 2·28) and for time to exclusive BMF cessation was 1·10 (95 % CI 0·98, 1·22). Among working women, the weighted HR for time to BMF cessation was 0·90 (95 % CI 0·75, 1·07) and for time to exclusive BMF cessation was 1·14 (95 % CI 0·96, 1·36). Overall, regular pumpers were more likely to stop BMF and exclusive BMF than non-regular/non-pumpers. Non-working regular pumpers were more likely than non-regular/non-pumpers to stop BMF. There was no effect among working women. Early, regular pumpers may need specialized support to maintain BMF.
ERIC Educational Resources Information Center
Coakley, John
2010-01-01
Professional cost estimators are widely used by architects during the design phases of a project to provide preliminary cost estimates. These estimates may begin at the conceptual design phase and are prepared at regular intervals through the construction document phase. Estimating professionals are frequently tasked with "selling" the importance…
Exercise prescription for the elderly: current recommendations.
Mazzeo, R S; Tanaka, H
2001-01-01
The benefits for elderly individuals of regular participation in both cardiovascular and resistance-training programmes are great. Health benefits include a significant reduction in risk of coronary heart disease, diabetes mellitus and insulin resistance, hypertension and obesity as well as improvements in bone density, muscle mass, arterial compliance and energy metabolism. Additionally, increases in cardiovascular fitness (maximal oxygen consumption and endurance), muscle strength and overall functional capacity are forthcoming, allowing elderly individuals to maintain their independence, increase levels of spontaneous physical activity and freely participate in activities associated with daily living. Taken together, these benefits associated with involvement in regular exercise can significantly improve the quality of life in elderly populations. It is noteworthy that the quality and quantity of exercise necessary to elicit important health benefits will differ from that needed to produce significant gains in fitness. This review describes the current recommendations for exercise prescriptions for the elderly for both cardiovascular and strength/resistance-training programmes. However, it must be noted that the benefits described are of little value if elderly individuals do not become involved in regular exercise regimens. Consequently, the major challenges facing healthcare professionals today concern: (i) the implementation of educational programmes designed to inform elderly individuals of the health and functional benefits associated with regular physical activity as well as how safe and effective such programmes can be; and (ii) the design of interventions that will both increase involvement in regular exercise as well as improve adherence and compliance to such programmes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, K; Lee, M; Kang, S
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a new variable derived by principal component analysis (PCA) from the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, the regularity of subsequent sessions could be estimated from it. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
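A rough numpy sketch of how such an index could be computed from a breathing trace, assuming peak-detected breath periods and amplitudes, a linear baseline fit, and, as a simplification, a plain Euclidean norm of the four fluctuation parameters in place of the PCA-derived variable described in the abstract; the signal parameters are synthetic:

import numpy as np
from scipy.signal import find_peaks
from scipy.stats import linregress

def regularity_index(signal, fs):
    """rho = ln(1 + 1/delta)/2 computed from a respiration trace (illustrative implementation)."""
    t = np.arange(signal.size) / fs
    peaks, _ = find_peaks(signal, distance=int(1.5 * fs))   # one peak per breath, assumed >1.5 s apart
    periods = np.diff(t[peaks])
    amplitudes = signal[peaks]
    slope, intercept, *_ = linregress(t, signal)            # baseline drift
    residual_sd = np.std(signal - (slope * t + intercept))
    feats = np.array([np.std(periods), np.std(amplitudes), abs(slope), residual_sd])
    delta = np.linalg.norm(feats)     # stand-in for the PCA-derived combination of the four factors
    return np.log(1.0 + 1.0 / delta) / 2.0

fs = 25.0
t = np.arange(0, 120, 1 / fs)
steady = np.cos(2 * np.pi * t / 8.0) ** 2                                  # power-of-cosine breathing
drifting = np.cos(2 * np.pi * t / (8.0 + 1.5 * np.sin(0.1 * t))) ** 2 + 0.01 * t
print(round(regularity_index(steady, fs), 3), round(regularity_index(drifting, fs), 3))

The steadier trace should yield the larger ρ, mirroring the interpretation given above.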
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
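As a hedged illustration of the underlying idea rather than the authors' estimator, the sketch below recovers a piecewise-constant, time-varying growth rate of a logistic ODE by gradient matching with a smoothed total-variation (fused) penalty, so the rate stays roughly constant within stages and changes at the transition; all model settings are invented for the example:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dt, n, K = 0.1, 200, 10.0
t = np.arange(n) * dt
r_true = np.where(t < 10, 0.4, 0.1)                   # growth rate changes once at t = 10

x = np.empty(n); x[0] = 0.5
for i in range(n - 1):                                # Euler-simulated logistic growth
    x[i + 1] = x[i] + dt * r_true[i] * x[i] * (1 - x[i] / K)
y = x + 0.05 * rng.standard_normal(n)                 # noisy observations

def objective(r, lam=1.0):
    # Gradient-matching fit: one-step Euler residuals plus a smoothed total-variation
    # (fused) penalty that favours a piecewise-constant r(t).
    pred = dt * r[:-1] * y[:-1] * (1 - y[:-1] / K)
    fit = np.sum((np.diff(y) - pred) ** 2)
    tv = np.sum(np.sqrt(np.diff(r) ** 2 + 1e-8))
    return fit + lam * tv

res = minimize(objective, x0=np.full(n, 0.5), method="L-BFGS-B", bounds=[(0.0, 2.0)] * n)
print("early-stage rate:", res.x[:80].mean().round(2), " late-stage rate:", res.x[120:].mean().round(2))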
NASA Astrophysics Data System (ADS)
Le, Nam Q.
2018-05-01
We obtain the Hölder regularity of time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation with right hand side being the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper who considered the cases of densities sufficiently close to a positive constant.
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed because of the unbounded amplification of the high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the problem. Using this technique, new stability estimates of Hölder and logarithmic type are derived which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
NASA Astrophysics Data System (ADS)
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, diverse methods for filling gaps in observed precipitation series have been proposed in recent years, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data of multiple incomplete series at the same time in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficult rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods; within the non-linear approaches, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but remained far from NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
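A minimal sketch of the quantile-mapping adjustment used here as post-processing, assuming an empirical CDF match between raw imputations and the observed record at the target station; the gamma-distributed rainfall and the biased raw estimate are synthetic stand-ins:

import numpy as np

def quantile_map(imputed, observed_reference, n_quantiles=100):
    """Adjust `imputed` values so their distribution matches `observed_reference`."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(imputed, probs)             # quantiles of the raw imputations
    ref_q = np.quantile(observed_reference, probs)  # quantiles of the observed record
    return np.interp(imputed, src_q, ref_q)

rng = np.random.default_rng(3)
observed = rng.gamma(shape=0.6, scale=12.0, size=3000)          # skewed daily rainfall (mm)
raw_fill = np.clip(0.7 * observed + rng.normal(0, 2, observed.size), 0, None)  # biased raw estimate
corrected = quantile_map(raw_fill, observed)
print("observed 95th pct:", round(np.percentile(observed, 95), 1),
      "raw:", round(np.percentile(raw_fill, 95), 1),
      "corrected:", round(np.percentile(corrected, 95), 1))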
Stability and Responsiveness in a Self-Organized Living Architecture
Garnier, Simon; Murphy, Tucker; Lutz, Matthew; Hurme, Edward; Leblanc, Simon; Couzin, Iain D.
2013-01-01
Robustness and adaptability are central to the functioning of biological systems, from gene networks to animal societies. Yet the mechanisms by which living organisms achieve both stability to perturbations and sensitivity to input are poorly understood. Here, we present an integrated study of a living architecture in which army ants interconnect their bodies to span gaps. We demonstrate that these self-assembled bridges are a highly effective means of maintaining traffic flow over unpredictable terrain. The individual-level rules responsible depend only on locally-estimated traffic intensity and the number of neighbours to which ants are attached within the structure. We employ a parameterized computational model to reveal that bridges are tuned to be maximally stable in the face of regular, periodic fluctuations in traffic. However analysis of the model also suggests that interactions among ants give rise to feedback processes that result in bridges being highly responsive to sudden interruptions in traffic. Subsequent field experiments confirm this prediction and thus the dual nature of stability and flexibility in living bridges. Our study demonstrates the importance of robust and adaptive modular architecture to efficient traffic organisation and reveals general principles regarding the regulation of form in biological self-assemblies. PMID:23555219
Unlocking Sensitivity for Visibility-based Estimators of the 21 cm Reionization Power Spectrum
NASA Astrophysics Data System (ADS)
Zhang, Yunfan Gerry; Liu, Adrian; Parsons, Aaron R.
2018-01-01
Radio interferometers designed to measure the cosmological 21 cm power spectrum require high sensitivity. Several modern low-frequency interferometers feature drift-scan antennas placed on a regular grid to maximize the number of instantaneously coherent (redundant) measurements. However, even for such maximum-redundancy arrays, significant sensitivity comes through partial coherence between baselines. Current visibility-based power-spectrum pipelines, though shown to ease control of systematics, lack the ability to make use of this partial redundancy. We introduce a method to leverage partial redundancy in such power-spectrum pipelines for drift-scan arrays. Our method cross-multiplies baseline pairs at a time lag and quantifies the sensitivity contributions of each pair of baselines. Using the configurations and beams of the 128-element Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER-128) and staged deployments of the Hydrogen Epoch of Reionization Array, we illustrate how our method applies to different arrays and predict the sensitivity improvements associated with pairing partially coherent baselines. As the number of antennas increases, we find partial redundancy to be of increasing importance in unlocking the full sensitivity of upcoming arrays.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Optimal management of batteries in electric systems
Atcitty, Stanley; Butler, Paul C.; Corey, Garth P.; Symons, Philip C.
2002-01-01
An electric system including at least a pair of battery strings and an AC source minimizes the use and maximizes the efficiency of the AC source by using the AC source only to charge all battery strings at the same time. Then one or more battery strings is used to power the load while management, such as application of a finish charge, is provided to one battery string. After another charge cycle, the roles of the battery strings are reversed so that each battery string receives regular management.
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size of hemodialysis units that maximizes production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best given the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup dialyzer) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, with four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production because equipment and personnel are not fully utilized, particularly the water purifier capacity, which happens to be the most expensive asset for these units.
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen uptake (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
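A generic illustration of the perturbation idea (not the authors' adaptive-LASSO implementation): refit a penalized Cox model under independent unit-mean exponential weights on the observations and take percentile intervals of the coefficients. The lifelines package and its bundled Rossi dataset are used purely for convenience, and the penalty strength is an arbitrary choice:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                   # recidivism data shipped with lifelines
rng = np.random.default_rng(7)
B, draws = 100, []
for _ in range(B):
    d = df.copy()
    d["w"] = rng.exponential(1.0, len(d))           # unit-mean perturbation weights
    cph = CoxPHFitter(penalizer=0.05, l1_ratio=1.0) # lasso-type penalty (illustrative strength)
    cph.fit(d, duration_col="week", event_col="arrest", weights_col="w")
    draws.append(cph.params_)
draws = pd.DataFrame(draws)
ci = draws.quantile([0.025, 0.975]).T               # percentile confidence limits per coefficient
print(ci.round(3))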
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.A.; Bryden, N.A.; Polansky, M.M.
1986-03-05
To determine if degree of training affects urinary Cr losses, Cr excretion of 8 adult trained and 5 untrained runners was determined on rest days and following exercise at 90% of maximal oxygen uptake on a treadmill to exhaustion with 30 second exercise and 30 second rest periods. Subjects were fed a constant daily diet containing 9 µg of Cr per 1000 calories to minimize changes due to diet. Maximal oxygen consumption of the trained runners was in the good or above range based upon their age and that of the untrained runners was average or below. While consuming the control diet, basal urinary Cr excretion of subjects who exercised regularly was significantly lower than that of the sedentary control subjects, 0.09 +/- 0.01 and 0.21 +/- 0.03 µg/day (mean +/- SEM), respectively. Daily urinary Cr excretion of trained subjects was significantly higher on the day of a single exercise bout at 90% of maximal oxygen consumption compared to nonexercise days, 0.12 +/- 0.02 and 0.09 +/- 0.01 µg/day, respectively. Urinary Cr excretion of 5 untrained subjects was not altered following controlled exercise. These data demonstrate that basal urinary Cr excretion and excretion in response to exercise are related to maximal oxygen consumption and therefore degree of fitness.
ERIC Educational Resources Information Center
Tarnus, Evelyne; Catan, Aurelie; Verkindt, Chantal; Bourdon, Emmanuel
2011-01-01
The maximal rate of O2 consumption (VO2max) constitutes one of the oldest fitness indexes established for the measure of cardiorespiratory fitness and aerobic performance. Procedures have been developed in which VO2max is estimated from physiological responses during submaximal exercise. Generally, VO…
A unified framework for group independent component analysis for multi-subject fMRI data
Guo, Ying; Pagnoni, Giuseppe
2008-01-01
Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105
Wyse, Cathy; Cathcart, Andy; Sutherland, Rona; Ward, Susan; McMillan, Lesley; Gibson, Graham; Padgett, Miles; Skeldon, Kenneth
2005-06-01
Exercise-induced oxidative stress (EIOS) refers to a condition where the balance of free radical production and antioxidant systems is disturbed during exercise in favour of pro-oxidant free radicals. Breath ethane is a product of free radical-mediated oxidation of cell membrane lipids and is considered to be a reliable marker of oxidative stress. The heatshock protein, haem oxygenase, is induced by oxidative stress and degrades haemoglobin to bilirubin, with concurrent production of carbon monoxide (CO). The aim of this study was to investigate the effect of maximal exercise on exhaled ethane and CO in human, canine, and equine athletes. Human athletes (n = 8) performed a maximal exercise test on a treadmill, and canine (n = 12) and equine (n = 11) athletes exercised at gallop on a sand racetrack. Breath samples were taken at regular intervals during exercise in the human athletes, and immediately before and after exercise in the canine and equine athletes. Breath samples were stored in gas-impermeable bags for analysis of ethane by laser spectroscopy, and CO was measured directly using an electrochemical CO monitor. Maximal exercise was associated with significant increases in exhaled ethane in the human, equine, and canine athletes. Decreased concentrations of exhaled CO were detected after maximal exercise in the human athletes, but CO was rarely detectable in the canine and equine athletes. The ethane breath test allows non-invasive and real-time detection of oxidative stress, and this method will facilitate further investigation of the processes mediating EIOS in human and animal athletes.
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least square method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. Newly infected influenza data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact rate, proportional to the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
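A hedged sketch of the estimation idea on synthetic data (not the CDC series): the SIR incidence is Euler-integrated and the contact rate is fitted by least squares with a small Tikhonov-type penalty toward a prior value; population size, recovery rate and penalty weight are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize_scalar

N, gamma, dt, days = 1e6, 1.0 / 3.0, 1.0, 120       # population, recovery rate (1/day), Euler step

def simulate_incidence(beta):
    """Forward-Euler SIR; returns daily new infections."""
    S, I = N - 10.0, 10.0
    new_cases = []
    for _ in range(days):
        inf = beta * S * I / N
        S, I = S - dt * inf, I + dt * (inf - gamma * I)
        new_cases.append(inf)
    return np.array(new_cases)

rng = np.random.default_rng(11)
observed = simulate_incidence(0.45) * rng.lognormal(0.0, 0.1, days)   # synthetic "reported cases"

def loss(beta, lam=1.0, beta_prior=0.4):
    resid = (simulate_incidence(beta) - observed) / (observed + 1.0)  # relative residuals
    return np.sum(resid ** 2) + lam * (beta - beta_prior) ** 2        # Tikhonov-type penalty

fit = minimize_scalar(loss, bounds=(0.05, 1.5), method="bounded")
print("estimated daily contact rate:", round(fit.x, 3))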
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
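A bare-bones sketch of the source-estimation loop, with a crude plume-style forward model standing in for the trained ANN surrogate and a plain particle swarm searching for the source location and emission rate; all constants and the sensor layout are invented for illustration, and the EM step described in the paper is omitted:

import numpy as np

rng = np.random.default_rng(5)

def plume(x_src, y_src, q, sensors, u=3.0, a=0.2):
    """Very simplified steady-state plume: concentration at each sensor location."""
    dx = sensors[:, 0] - x_src
    dy = sensors[:, 1] - y_src
    sig = a * np.clip(dx, 1.0, None)                # lateral spread grows downwind
    c = q / (2 * np.pi * u * sig**2) * np.exp(-dy**2 / (2 * sig**2))
    return np.where(dx > 0, c, 0.0)                 # only downwind sensors see the release

sensors = rng.uniform([50, -100], [400, 100], size=(20, 2))
truth = (0.0, 10.0, 5000.0)                         # source x, y and emission rate
obs = plume(*truth, sensors) * rng.lognormal(0, 0.05, len(sensors))

def cost(p):
    return np.sum((plume(p[0], p[1], p[2], sensors) - obs) ** 2)

# Plain particle swarm optimization over (x, y, q).
n_p, iters = 40, 200
lo, hi = np.array([-50.0, -50.0, 100.0]), np.array([100.0, 50.0, 20000.0])
pos = rng.uniform(lo, hi, (n_p, 3)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_p, 3)), rng.random((n_p, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
print("estimated source (x, y, q):", np.round(gbest, 1))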
Ravier, Gilles; Bouzigon, Romain; Beliard, Samuel; Tordi, Nicolas; Grappe, Frederic
2018-04-04
Ravier, G, Bouzigon, R, Beliard, S, Tordi, N, and Grappe, F. Benefits of compression garments worn during handball-specific circuit on short-term fatigue in professional players. J Strength Cond Res XX(X): 000-000, 2016-The purpose of this study was to investigate the benefits of full-leg length compression garments (CGs) worn during a handball-specific circuit exercises on athletic performance and acute fatigue-induced changes in strength and muscle soreness in professional handball players. Eighteen men (mean ± SD: age 23.22 ± 4.97 years; body mass: 82.06 ± 9.69 kg; height: 184.61 ± 4.78 cm) completed 2 identical sessions either wearing regular gym short or CGs in a randomized crossover design. Exercise circuits of explosive activities included 3 periods of 12 minutes of sprints, jumps, and agility drills every 25 seconds. Before, immediately after and 24 hours postexercise, maximal voluntary knee extension (maximal voluntary contraction, MVC), rate of force development (RFD), and muscle soreness were assessed. During the handball-specific circuit sprint and jump performances were unchanged in both conditions. Immediately after performing the circuit exercises MVC, RFD, and PPT decreased significantly compared with preexercise with CGs and noncompression clothes. Decrement was similar in both conditions for RFD (effect size, ES = 0.40) and PPT for the soleus (ES = 0.86). However, wearing CGs attenuated decrement in MVC (p < 0.001) with a smaller decrease (ES = 1.53) in CGs compared with regular gym shorts condition (-5.4 vs. -18.7%, respectively). Full recovery was observed 24 hours postexercise in both conditions for muscle soreness, MVC, and RFD. These findings suggest that wearing CGs during a handball-specific circuit provides benefits on the impairment of the maximal muscle force characteristics and is likely to be worthwhile for handball players involved in activities such as tackles.
Estimation of reflectance from camera responses by the regularized local linear model.
Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye
2011-10-01
Because of the limited approximation capability of fixed basis functions, the performance of reflectance estimation obtained by traditional linear models is not optimal. We propose an approach based on a regularized local linear model. Our approach is computationally efficient, and knowledge of the spectral power distribution of the illuminant and of the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
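A small numpy sketch of a regularized local linear estimate under assumed synthetic training data: for each query camera response, the k nearest training responses are used to fit a ridge-regularized affine map from responses to reflectances; the neighbourhood size and penalty are illustrative, not the paper's settings:

import numpy as np

rng = np.random.default_rng(2)
n_train, n_bands, k, lam = 500, 31, 40, 1e-2

# Synthetic training set: smooth random reflectances and a fixed camera (not used by the estimator).
wl = np.linspace(0, 1, n_bands)
knots = np.linspace(0, 1, 8)
reflectances = np.clip(np.array([np.interp(wl, knots, b) for b in rng.random((n_train, 8))]), 0, 1)
camera = rng.random((n_bands, 3))
responses = reflectances @ camera                   # RGB-like camera responses

def estimate_reflectance(query, responses, reflectances, k=40, lam=1e-2):
    d = np.linalg.norm(responses - query, axis=1)
    idx = np.argsort(d)[:k]                         # local neighbourhood in response space
    X = np.hstack([responses[idx], np.ones((k, 1))])  # affine local model
    Y = reflectances[idx]
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)  # ridge solution
    return np.append(query, 1.0) @ W

test_true = np.clip(np.interp(wl, knots, rng.random(8)), 0, 1)
est = estimate_reflectance(test_true @ camera, responses, reflectances, k, lam)
print("RMS reflectance error:", round(float(np.sqrt(np.mean((est - test_true) ** 2))), 4))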
Validity of Postexercise Measurements to Estimate Peak VO2 in 200-m and 400-m Maximal Swims.
Rodríguez, Ferran A; Chaverri, Diego; Iglesias, Xavier; Schuller, Thorsten; Hoffmann, Uwe
2017-06-01
To assess the validity of postexercise measurements to estimate oxygen uptake (V˙O2) during swimming, we compared V˙O2 measured directly during an all-out 200-m swim with measurements estimated during 200-m and 400-m maximal tests using several methods, including a recent heart rate (HR)/V˙O2 modelling procedure. 25 elite swimmers performed a 200-m maximal swim where V˙O2 was measured using a swimming snorkel connected to a gas analyzer. The criterion variable was V˙O2 in the last 20 s of effort, which was compared with the following V˙O2peak estimates: 1) first 20-s average; 2) linear backward extrapolation (BE) of the first 20 and 30 s, 3×20-s, 4×20-s, and 3×20-s or 4×20-s averages; 3) semilogarithmic BE at the same intervals; and 4) predicted V˙O2peak using mathematical modelling of 0-20 s and 5-20 s during recovery. In 2 series of experiments, both of the HR/V˙O2 modelled values most accurately predicted the V˙O2peak (mean ∆=0.1-1.6%). The BE methods overestimated the criterion values by 4-14%, and the single 20-s measurement technique yielded an underestimation of 3.4%. Our results confirm that the HR/V˙O2 modelling technique, used over a maximal 200-m or 400-m swim, is a valid and accurate procedure for assessing cardiorespiratory and metabolic fitness in competitive swimmers. © Georg Thieme Verlag KG Stuttgart · New York.
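For the semilogarithmic backward-extrapolation estimator mentioned above, a toy example on synthetic recovery data: fit a line to the logarithm of the first 20-s recovery averages and extrapolate to time zero. The assumed recovery time constant and noise level are illustrative, not measured values:

import numpy as np

# Synthetic post-exercise VO2 recovery (L/min), sampled as 20-s averages.
vo2_peak_true, tau = 4.2, 60.0                       # exponential recovery time constant (s)
t_mid = np.array([10.0, 30.0, 50.0, 70.0])           # mid-points of the first four 20-s intervals
vo2_recovery = vo2_peak_true * np.exp(-t_mid / tau) * np.random.default_rng(4).lognormal(0, 0.02, 4)

# Semilogarithmic backward extrapolation over the first 3 x 20-s averages.
coef = np.polyfit(t_mid[:3], np.log(vo2_recovery[:3]), 1)
vo2_peak_est = float(np.exp(np.polyval(coef, 0.0)))
print("estimated VO2peak:", round(vo2_peak_est, 2), "vs true", vo2_peak_true)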
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
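A hedged sketch of the linear-in-parameters estimation route for a PHM with polynomial branches (u, u², u³) each followed by an FIR filter; a plain diagonal (Tikhonov) penalty is used here in place of the impulse-response kernels referred to in the abstract, and all system settings are synthetic:

import numpy as np

rng = np.random.default_rng(6)
N, L, degree, lam = 4000, 16, 3, 1e-2        # samples, FIR length per branch, polynomial order

u = rng.standard_normal(N)
true_firs = [np.exp(-np.arange(L) / 3.0) * rng.standard_normal(L) * w for w in (1.0, 0.4, 0.1)]
y = sum(np.convolve(u ** (p + 1), h)[:N] for p, h in enumerate(true_firs))
y += 0.05 * rng.standard_normal(N)

def lagged(x, L):
    """Regressor block containing x and its L-1 delayed copies."""
    return np.stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)], axis=1)

Phi = np.hstack([lagged(u ** (p + 1), L) for p in range(degree)])   # linear-in-parameters regressor
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)  # ridge-regularized LS
est_firs = theta.reshape(degree, L)
print("branch-1 FIR error:", round(float(np.linalg.norm(est_firs[0] - true_firs[0])), 4))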
Abouleish, Amr E; Dexter, Franklin; Epstein, Richard H; Lubarsky, David A; Whitten, Charles W; Prough, Donald S
2003-04-01
Determination of operating room (OR) block allocation and case scheduling is often not based on maximizing OR efficiency, but rather on tradition and surgeon convenience. As a result, anesthesiology groups often incur additional labor costs. When negotiating financial support, heads of anesthesiology departments are often challenged to justify the subsidy necessary to offset these additional labor costs. In this study, we describe a method for calculating a statistically sound estimate of the excess labor costs incurred by an anesthesiology group because of inefficient OR allocation and case scheduling. OR information system and anesthesia staffing data for 1 yr were obtained from two university hospitals. Optimal OR allocation for each surgical service was determined by maximizing the efficiency of use of the OR staff. Hourly costs were converted to dollar amounts by using the nationwide median compensation for academic and private-practice anesthesia providers. Differences between actual costs and the optimal OR allocation were determined. For Hospital A, estimated annual excess labor costs were $1.6 million (95% confidence interval, $1.5-$1.7 million) and $2.0 million ($1.89-$2.05 million) when academic and private-practice compensation, respectively, was calculated. For Hospital B, excess labor costs were $1.0 million ($1.08-$1.17 million) and $1.4 million ($1.32-1.43 million) for academic and private-practice compensation, respectively. This study demonstrates a methodology for an anesthesiology group to estimate its excess labor costs. The group can then use these estimates when negotiating for subsidies with its hospital, medical school, or multispecialty medical group. We describe a new application for a previously reported statistical method to calculate operating room (OR) allocations to maximize OR efficiency. When optimal OR allocations and case scheduling are not implemented, the resulting increase in labor costs can be used in negotiations as a statistically sound estimate for the increased labor cost to the anesthesiology department.
Obtaining sparse distributions in 2D inverse problems.
Reci, A; Sederman, A J; Gladden, L F
2017-08-01
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization are able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. Copyright © 2017. Published by Elsevier Inc.
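A sketch of the general L1 approach (not the authors' algorithm): a non-negative, L1-regularized least-squares inversion of a small 1-D Laplace-type kernel, solved by proximal gradient (ISTA), standing in for the 2-D T1-T2 problem; the kernel, regularization strength and noise level are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(8)
taus = np.logspace(-3, 1, 100)                  # candidate relaxation times (s)
t = np.linspace(0.001, 5.0, 200)                # measurement times (s)
K = np.exp(-np.outer(t, 1.0 / taus))            # Laplace-type kernel (ill-conditioned)

x_true = np.zeros(taus.size)
x_true[[30, 60]] = [1.0, 0.6]                   # two discrete components
b = K @ x_true + 0.01 * rng.standard_normal(t.size)

def nonneg_l1_ls(K, b, lam=0.02, n_iter=20000):
    """min 0.5*||Kx - b||^2 + lam*||x||_1 subject to x >= 0, solved by ISTA."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2      # 1 / Lipschitz constant of the smooth part
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ x - b)
        x = np.maximum(x - step * grad - step * lam, 0.0)   # soft-threshold plus non-negativity
    return x

x_hat = nonneg_l1_ls(K, b)
top = np.argsort(x_hat)[-2:]
print("dominant recovered components at tau ~", np.round(np.sort(taus[top]), 3),
      "(true components at", np.round(taus[[30, 60]], 3), ")")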
Finding specific RNA motifs: Function in a zeptomole world?
KNIGHT, ROB; YARUS, MICHAEL
2003-01-01
We have developed a new method for estimating the abundance of any modular (piecewise) RNA motif within a longer random region. We have used this method to estimate the size of the active motifs available to modern SELEX experiments (picomoles of unique sequences) and to a plausible RNA World (zeptomoles of unique sequences: 1 zmole = 602 sequences). Unexpectedly, activities such as specific isoleucine binding are almost certainly present in zeptomoles of molecules, and even ribozymes such as self-cleavage motifs may appear (depending on assumptions about the minimal structures). The number of specified nucleotides is not the only important determinant of a motif’s rarity: The number of modules into which it is divided, and the details of this division, are also crucial. We propose three maxims for easily isolated motifs: the Maxim of Minimization, the Maxim of Multiplicity, and the Maxim of the Median. These maxims together state that selected motifs should be small and composed of as many separate, equally sized modules as possible. For evenly divided motifs with four modules, the largest accessible activity in picomole scale (1–1000 pmole) pools of length 100 is about 34 nucleotides; while for zeptomole scale (1–1000 zmole) pools it is about 20 specific nucleotides (50% probability of occurrence). This latter figure includes some ribozymes and aptamers. Consequently, an RNA metabolism apparently could have begun with only zeptomoles of RNA molecules. PMID:12554865
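A back-of-envelope check of the figures quoted above, assuming a stars-and-bars count of ordered, non-overlapping module placements; this is a rough estimate, not the authors' estimator. It computes the expected number of matches of a motif with n specified nucleotides split into m modules within a pool of random sequences of length 100:

from math import comb

def expected_matches(L, module_lengths, pool_size):
    """Expected motif matches in a pool of random sequences of length L (crude estimate)."""
    n, m = sum(module_lengths), len(module_lengths)
    placements = comb(L - n + m, m)              # ordered, non-overlapping placements with arbitrary gaps
    p_per_sequence = placements * 0.25 ** n      # each specified nucleotide matches with prob 1/4
    return p_per_sequence * pool_size

zeptomole = 602                                  # 1 zmol is roughly 602 molecules
picomole = 6.02e11
print(expected_matches(100, [5, 5, 5, 5], 1000 * zeptomole))   # 20 nt in four modules, zeptomole-scale pool
print(expected_matches(100, [9, 9, 8, 8], 1000 * picomole))    # 34 nt in four modules, picomole-scale pool

Both calls return an expected count on the order of one match, consistent with the ~20-nucleotide (zeptomole) and ~34-nucleotide (picomole) figures quoted in the abstract.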
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The simulations also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the 89 real resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
Holtzman, Tahl; Jörntell, Henrik
2011-01-01
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297
Optimal Implementations for Reliable Circadian Clocks
NASA Astrophysics Data System (ADS)
Hasegawa, Yoshihiko; Arita, Masanori
2014-09-01
Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
Mixture models with entropy regularization for community detection in networks
NASA Astrophysics Data System (ADS)
Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang
2018-04-01
Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures, including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we propose an entropy-regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM within an EM (expectation-maximization) solution, small clusters that contain little information can be discarded step by step. Empirical studies on both synthetic and real networks show that the proposed EMM is superior to state-of-the-art methods.
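The entropy-regularized M-step can be illustrated with a hedged, stand-alone sketch. This is not the authors' EMM for networks: it uses a plain 1-D Gaussian mixture only to show how adding a penalty that lowers the entropy of the mixing coefficients drives superfluous components toward zero weight so they can be discarded. The penalty weight beta, the pruning threshold, and the synthetic data are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def m_step_weights(resp, beta):
    """M-step for the mixing weights with an entropy penalty:
    maximize sum_k N_k*log(pi_k) + beta*N*sum_k pi_k*log(pi_k) on the simplex."""
    N_k = resp.sum(axis=0)
    N, K = resp.shape

    def neg_obj(pi):
        pi = np.clip(pi, 1e-12, 1.0)
        return -(N_k @ np.log(pi) + beta * N * np.sum(pi * np.log(pi)))

    cons = ({"type": "eq", "fun": lambda pi: pi.sum() - 1.0},)
    res = minimize(neg_obj, N_k / N, bounds=[(0.0, 1.0)] * K, constraints=cons)
    return res.x

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
K, beta = 5, 0.2                                   # deliberately too many components
pi, mu, sig = np.full(K, 1 / K), rng.normal(0, 3, K), np.ones(K)
for _ in range(50):
    dens = np.array([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sig)]).T
    resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)     # E-step
    pi = m_step_weights(resp, beta)                              # entropy-penalized M-step
    keep = pi > 1e-3                                             # discard negligible clusters
    pi, mu, sig, resp = pi[keep], mu[keep], sig[keep], resp[:, keep]
    pi = pi / pi.sum()
    resp = resp / (resp.sum(axis=1, keepdims=True) + 1e-300)
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk                    # usual Gaussian M-step
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk) + 1e-6
print("surviving components:", len(pi), "weights:", np.round(pi, 3))
```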
Orlowska-Kowalska, Teresa; Kaminski, Marcin
2014-01-01
The paper deals with the implementation of optimized neural networks (NNs) for state-variable estimation in a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of the closed-loop system. The precision of state-variable estimation depends on the generalization properties of the NNs. A short review of NN optimization methods is presented. Two techniques, representative of regularization and pruning methods respectively, are described and tested in detail: Bayesian regularization and Optimal Brain Damage. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.
Astley, H C; Abbott, E M; Azizi, E; Marsh, R L; Roberts, T J
2013-11-01
Maximal performance is an essential metric for understanding many aspects of an organism's biology, but it can be difficult to determine because a measured maximum may reflect only a peak level of effort, not a physiological limit. We used a unique opportunity provided by a frog jumping contest to evaluate the validity of existing laboratory estimates of maximum jumping performance in bullfrogs (Rana catesbeiana). We recorded video of 3124 bullfrog jumps over the course of the 4-day contest at the Calaveras County Jumping Frog Jubilee, and determined jump distance from these images and a calibration of the jump arena. Frogs were divided into two groups: 'rental' frogs collected by fair organizers and jumped by the general public, and frogs collected and jumped by experienced, 'professional' teams. A total of 58% of recorded jumps surpassed the maximum jump distance in the literature (1.295 m), and the longest jump was 2.2 m. Compared with rental frogs, professionally jumped frogs jumped farther, and the distribution of jump distances for this group was skewed towards long jumps. Calculated muscular work, historical records and the skewed distribution of jump distances all suggest that the longest jumps represent the true performance limit for this species. Using resampling, we estimated the probability of observing a given jump distance for various sample sizes, showing that large sample sizes are required to detect rare maximal jumps. These results show the importance of sample size, animal motivation and physiological conditions for accurate maximal performance estimates.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also yields an optimal estimate. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived to characterize the estimation accuracy and enable performance comparison. The proposed method is verified with simulations. PMID:29617323
A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.
Liu, Zitao; Hauskrecht, Milos
2015-01-01
Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down spurious and unnecessary dimensions of the LDS and, consequently, address the problem of choosing the optimal number of hidden states; (2) prevent overfitting given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the transition matrix, which lead to two instances of our rLDS. We show that our rLDS recovers the intrinsic dimensionality of the time series dynamics well and improves predictive performance when compared to baselines on both synthetic and real-world MTS datasets.
Zhang, Jian-Hua; Böhme, Johann F
2007-11-01
In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of a mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of the Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time rely on cross-correlation, parameter fitting, and non-parametric deconvolution. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse-problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve, even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance the smoothness of the Water Residence Time against the accuracy of the reconstruction. We propose an approach to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
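As an illustration of the kind of regularized, constrained deconvolution described above (not the authors' algorithm), the following sketch estimates a non-negative, smooth impulse response from a synthetic rain/aquifer pair. Causality is imposed by the Toeplitz structure of the convolution matrix, and the smoothness weight lam plays the role of the single regularization parameter; all data and parameter values are made up.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

def deconvolve_wrt(rain, level, n_h, lam):
    """Solve min ||R h - level||^2 + lam ||D h||^2 subject to h >= 0."""
    # Causal convolution matrix: level[t] = sum_k rain[t-k] * h[k]
    first_row = np.zeros(n_h); first_row[0] = rain[0]
    R = toeplitz(rain, first_row)
    # Second-difference operator enforcing smoothness of h
    D = np.diff(np.eye(n_h), n=2, axis=0)
    A = np.vstack([R, np.sqrt(lam) * D])
    b = np.concatenate([level, np.zeros(D.shape[0])])
    h, _ = nnls(A, b)                                 # positivity constraint
    return h

rng = np.random.default_rng(2)
t = np.arange(200)
h_true = np.exp(-t[:40] / 8.0); h_true /= h_true.sum()      # assumed residence-time curve
rain = rng.gamma(shape=0.5, scale=2.0, size=200)
level = np.convolve(rain, h_true)[:200] + 0.05 * rng.standard_normal(200)
h_est = deconvolve_wrt(rain, level, n_h=40, lam=5.0)
print("peak of estimated residence-time curve at t =", int(np.argmax(h_est)))
```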
Hermassi, Souhail; Chelly, Mohamed Souhaiel; Fieseler, Georg; Bartels, Thomas; Schulze, Stephan; Delank, Karl-Stefan; Shephard, Roy J; Schwesig, René
2017-09-01
Background: Team handball is an intense ball sport with specific requirements on technical skills, tactical understanding, and physical performance. The ability of handball players to develop explosive efforts (e.g. sprinting, jumping, changing direction) is crucial to success. Objective: The purpose of this pilot study was to examine the effects of an in-season high-intensity strength training program on the physical performance of elite handball players. Materials and methods: Twenty-two handball players (a single national-level Tunisian team) were randomly assigned to a control group (CG; n = 10) or a training group (TG; n = 12). At the beginning of the pilot study, all subjects performed a battery of motor tests: a one repetition maximum (1-RM) half-squat test, a repeated sprint test [6 × (2 × 15 m) shuttle sprints], squat jumps, counter movement jumps (CMJ), and the Yo-Yo intermittent recovery test level 1. The TG additionally performed a maximal leg strength program twice a week for 10 weeks immediately before engaging in regular handball training. Each strength training session included half-squat exercises to strengthen the lower limbs (80-95% of 1-RM, 1-3 repetitions, 3-6 sets, 3-4 min rest between sets). The control group underwent no additional strength training. The motor test battery was repeated at the end of the study interventions. Results: In the TG, 3 parameters (maximal strength of the lower limbs: η² = 0.74; CMJ: η² = 0.70; and RSA best time: η² = 0.25) showed significant improvements, with large effect sizes (e.g. CMJ: d = 3.77). A reduction in performance for these same 3 parameters was observed in the CG (d = -0.24). Conclusions: The results support our hypothesis that additional strength training twice a week enhances the maximal strength of the lower limbs and jumping or repeated-sprint performance. There was no evidence that shuttle sprints ahead of regular training compromised players' speed and endurance capacities. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Bugbee, B.; Monje, O.
1992-01-01
Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.
A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement
NASA Astrophysics Data System (ADS)
Koner, P.; Battaglia, A.; Simmer, C.
2009-04-01
The retrieval of rain rate from attenuated radar measurements (e.g. the Cloud Profiling Radar on board CloudSat, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuation (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. Based on optimal estimation theory, it is generally argued that in the case of visible attenuation there is no solution without constraining the problem, because there is not enough information content to solve it. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises an obvious question: is all of the information contained in this additional measurement? This also appears to contradict information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be answered using the estimation and information theory underlying OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a-priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a-priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above-mentioned question is discussed on the basis of regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
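A toy numerical illustration of the point about regularization and the condition number of the Jacobian (my own sketch, not from the cited works): adding an a-priori term, as OEM does, lowers the condition number of the normal equations and hence the noise injected into the state space. The Jacobian and covariances below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((50, 10)) @ np.diag(np.logspace(0, -4, 10))  # ill-conditioned Jacobian
Se_inv = np.eye(50)                 # inverse measurement-error covariance, assumed identity
Sa_inv = 1e-2 * np.eye(10)          # inverse a-priori covariance acting as the regularizer

unregularized = J.T @ Se_inv @ J
regularized = unregularized + Sa_inv
print("cond without prior: %.2e" % np.linalg.cond(unregularized))
print("cond with prior   : %.2e" % np.linalg.cond(regularized))
```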
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of the physical properties of falling snow particles from single-frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple-frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple-frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple-frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple-frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple-frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that the EM methodology formulated in this study yields snowfall estimates above the freezing level in ETCs that are consistent with the triple-frequency radar observations as well as with independent rainfall estimates below the freezing level.
NASA Astrophysics Data System (ADS)
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimates, through changes in the moisture-holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of an MP estimation approach that utilizes a physically based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two severest maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise that are assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW. The MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century compared to the historical period.
The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators
NASA Astrophysics Data System (ADS)
Ahmedov, Anvarjon
2018-03-01
In the present research we investigate problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the Liouville classes. Sufficient conditions for almost everywhere convergence, one of the most difficult problems in harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over the elliptic levels are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves very complicated calculations that depend on the functional structure of the classes of functions. The main idea in proving the almost everywhere convergence of the eigenfunction expansions in interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for a family of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost everywhere convergence of multiple Fourier series by elliptic summation methods is established. Considering multiple Fourier series as eigenfunction expansions of differential operators helps to translate the functional properties (for example, smoothness) of the Liouville classes into properties of the Fourier coefficients of the functions being expanded. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigation of multiple Fourier series with modern methods of harmonic analysis incorporates wide use of methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of a square-integrable function by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated. The latter is applied to obtain an estimate for the maximal operator in the Liouville classes.
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were fitted with various fluorochrome inclusions (Cy5.5), and optical data were acquired at 60 projections over 360 degrees for each. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
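For orientation, here is a minimal, assumption-laden sketch of a Richardson-Lucy (MLEM) iteration with a crude stand-in for the floating-default regularization: after each multiplicative update the estimate is pulled toward a Gaussian-smoothed copy of itself. The blur model, the blending weight alpha, and the smoothing sigma are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_rl(measured, psf_sigma, n_iter=50, alpha=0.2, sigma=2.0):
    convolve = lambda img: gaussian_filter(img, psf_sigma)   # forward model: Gaussian blur
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        forward = convolve(estimate) + 1e-12
        ratio = convolve(measured / forward)                 # RL back-projection of the ratio
        estimate = estimate * ratio                          # multiplicative (positive) update
        default = gaussian_filter(estimate, sigma)           # floating default: smoothed copy
        estimate = (1 - alpha) * estimate + alpha * default  # pull toward the default
    return estimate

rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[20:24, 30:34] = 10.0; truth[45, 10] = 30.0
blurred = gaussian_filter(truth, 3.0)
data = rng.poisson(blurred + 0.5).astype(float)
recon = regularized_rl(data, psf_sigma=3.0)
print("max of reconstruction: %.1f" % recon.max())
```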
77 FR 37638 - Noncommercial Educational Station Fundraising for Third-Party Non-Profit Organizations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-22
... educational (NCE) broadcast stations to conduct on-air fundraising activities that interrupt regular... eliminate the need for NCE stations to seek a waiver of the Commission's rules to interrupt regular... Responses: 2,200 respondents/30,800 responses. Estimated Time per Response: 0.25 to 1.5 hours. Frequency of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... regular security audits and have been certified for operation. The CPSC observes all industry and Federal government best practices for network security. CPSC staff regularly analyzes its systems for vulnerabilities and malware, and monitor the network for real-time intrusion attempts. B. Estimated Burden The CPSC...
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-21
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with other existing deconvolution algorithms in the digital phantom studies, and similar gains can be obtained in the porcine data experiment.
Off-axis targets maximize bearing Fisher Information in broadband active sonar.
Kloepper, Laura N; Buck, John R; Liu, Yang; Nachtigall, Paul E
2018-01-01
Broadband active sonar systems estimate range from time delay and velocity from Doppler shift. Relatively little attention has been paid to how the received echo spectrum encodes information about the bearing of an object. This letter derives the bearing Fisher Information encoded in the frequency dependent transmitter beampattern. This leads to a counter-intuitive result: directing the sonar beam so that a target of interest is slightly off-axis maximizes the bearing information about the target. Beam aim data from a dolphin biosonar experiment agree closely with the angle predicted to maximize bearing information.
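The counter-intuitive result can be reproduced qualitatively with a toy calculation (my own sketch, not the letter's derivation): for an echo amplitude proportional to a Gaussian beampattern observed in additive Gaussian noise, the bearing Fisher information is proportional to the squared slope of the beampattern, which vanishes on-axis and peaks off-axis. The beamwidth and noise level are assumptions.

```python
import numpy as np

theta = np.linspace(-30, 30, 601)             # bearing relative to beam axis, degrees
beamwidth = 10.0                              # assumed beamwidth parameter
sigma = 0.05                                  # assumed measurement noise std
b = np.exp(-0.5 * (theta / beamwidth) ** 2)   # Gaussian beampattern (amplitude)
db = np.gradient(b, theta)
fisher = db ** 2 / sigma ** 2                 # Fisher information for a Gaussian-noise amplitude
print("Fisher information peaks at %.1f degrees off-axis" % abs(theta[np.argmax(fisher)]))
```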
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for the evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
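The core quantity can be sketched as follows, assuming a binary spike indicator, a single binned input variable, and a made-up logistic tuning function; this plug-in histogram estimate of the mutual information only illustrates the idea and is not the authors' estimator.

```python
import numpy as np

def mutual_information(spikes, variable, n_bins=20):
    """I(spike; variable) in bits, from a plug-in joint histogram estimate."""
    edges = np.linspace(variable.min(), variable.max(), n_bins + 1)
    idx = np.clip(np.digitize(variable, edges) - 1, 0, n_bins - 1)
    joint = np.zeros((2, n_bins))
    for s, i in zip(spikes.astype(int), idx):
        joint[s, i] += 1
    joint /= joint.sum()
    p_s, p_v = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_s @ p_v)[nz])))

rng = np.random.default_rng(5)
eye_velocity = rng.uniform(-40, 40, 20000)                       # deg/s, assumed stimulus
p_spike = 0.3 / (1 + np.exp(-(eye_velocity - 10) / 5))           # assumed tuning function
spikes = rng.random(20000) < p_spike
print("estimated MI: %.3f bits" % mutual_information(spikes, eye_velocity))
```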
W-phase estimation of first-order rupture distribution for megathrust earthquakes
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour after the origin time.
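The discrepancy-principle selection of the Tikhonov parameter mentioned above can be sketched generically: pick the lambda whose residual norm best matches the expected noise level. The matrix A, the noise level, and the lambda grid below are toy assumptions, and the example is not the W-phase inversion itself.

```python
import numpy as np

def tikhonov(A, y, lam):
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

def discrepancy_lambda(A, y, sigma, lambdas):
    target = sigma * np.sqrt(len(y))               # expected residual norm
    resid = [np.linalg.norm(y - A @ tikhonov(A, y, lam)) for lam in lambdas]
    return lambdas[int(np.argmin(np.abs(np.array(resid) - target)))]

rng = np.random.default_rng(6)
A = rng.standard_normal((120, 60)) @ np.diag(np.logspace(0, -3, 60))   # ill-posed system
x_true = rng.standard_normal(60)
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(120)
lam = discrepancy_lambda(A, y, sigma, np.logspace(-6, 1, 40))
print("discrepancy-principle lambda: %.2e" % lam)
```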
De Graaf, G; Van Hove, G; Haveman, M
2014-07-01
In the Netherlands, as in many other countries, there are indications of an inclusive school policy for children with Down syndrome. However, there is a lack of studies that evaluate to what extent this policy has actually succeeded in supporting the mainstreaming of these students. For the period 1984-2011, the number of children with Down syndrome entering regular education and the percentage of children still in regular education after 1-7 years were estimated on the basis of samples from the database of the Dutch Down Syndrome Foundation. These estimates were combined with historical demographic data on the total number of children with Down syndrome of primary school age. Validity of the model was examined by comparison of the model-based estimates of numbers and percentages in regular education with relevant available empirical data from the Dutch Ministry of Education and from Dutch special schools. The percentage of all children with Down syndrome in the age range 4-13 in regular primary education has risen from 1% or 2% (at the very most about 20 children) in 1986-1987, to 10% (about 140 children) in 1991-1992, to 25% (about 400) in 1996-1997, to 35% (about 650) in 2001-2002 and to 37% (about 800) since 2005-2006. The proportional increase stopped in recent years. During the 1980s and 1990s, clearly more and more children with Down syndrome were in regular education, being supported by the then existing ad hoc regulations aimed at providing extra support in regular education. In the Netherlands, in 2003, these temporary regulations were transformed into structural legislation for children with disabilities. With regard to the mainstreaming of students with Down syndrome, the 2003 legislation has consolidated the situation. However, as percentages in regular education stayed fairly constant after 2000, it has failed to boost the mainstreaming of children with Down syndrome. The results of this study are discussed in the context of national and international legislation and educational policy. © 2013 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Partial knowledge, entropy, and estimation
MacQueen, James; Marschak, Jacob
1975-01-01
In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p ≡ (p₁, ..., pₙ) by maximizing the entropy H(p) ≡ −Σᵢ pᵢ log pᵢ, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is “conservative,” and therefore good, to maximize “uncertainty,” as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when “complete ignorance” is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733
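A worked micro-example of the maximum-entropy method under a single moment constraint (the classic die with a known mean, a constraint chosen purely for illustration): the maximizer has the exponential-family form pᵢ ∝ exp(β·xᵢ), with β fixed by root-finding.

```python
import numpy as np
from scipy.optimize import brentq

values = np.arange(1, 7)        # e.g., faces of a die
target_mean = 3.2               # assumed partial knowledge: the mean only

def mean_given_beta(beta):
    w = np.exp(beta * values)
    return (values * w).sum() / w.sum()

# Find beta so that the exponential-family distribution matches the constrained mean
beta = brentq(lambda b: mean_given_beta(b) - target_mean, -5, 5)
p = np.exp(beta * values); p /= p.sum()
print("max-entropy p:", np.round(p, 4), " entropy: %.3f nats" % -(p @ np.log(p)))
```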
Beer volatile compounds and their application to low-malt beer fermentation.
Kobayashi, Michiko; Shimizu, Hiroshi; Shioya, Suteaki
2008-10-01
Low-malt beers, in which the amount of wort is adjusted to less than two-thirds of that in regular beer, are popular in the Japanese market because the flavor of low-malt beer is similar to that of regular beer while its price is lower. There are few published articles about low-malt beer. However, there are many similarities between the production processes of low-malt and regular beer; e.g., the yeast used in low-malt beer fermentation is the same as that used for regular beer. Furthermore, many investigations into regular beer are applicable to low-malt beer production. In this review, we focus on the production of volatile compounds and on studies that are applicable to both regular and low-malt beer. In particular, the metabolism of volatile compounds in yeast cells during fermentation, volatile compound measurement and estimation methods, and the control of volatile compound production are discussed, with an emphasis on studies published in the last 5-6 years.
Regularized Chapman-Enskog expansion for scalar conservation laws
NASA Technical Reports Server (NTRS)
Schochet, Steven; Tadmor, Eitan
1990-01-01
Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.
2017-12-01
MANAGEMENT: MAXIMIZING THE INFLUENCE OF EXTERNAL SPONSORS OVER AFFILIATED ARMED GROUPS, by Anders C. Hamlin, December 2017 (thesis).
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area-source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area-source domain is considered, where the strength of the area-source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed from the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
NASA Astrophysics Data System (ADS)
Barbosa, Tiago M.; Coelho, Eduarda
2017-07-01
The aim was to run a case study of the biomechanics of a wheelchair sprinter racing the 100 m final at the 2016 Paralympic Games. Stroke kinematics was measured by video analysis in each 20 m split. Race kinetics was estimated by employing an analytical model that encompasses the computation of the rolling friction, drag, energy output and energy input. A maximal average speed of 6.97 m·s⁻¹ was reached in the last split. It was estimated that the contributions of the rolling friction and drag force would account for 54% and 46% of the total resistance at maximal speed, respectively. Energy input and output increased over the event. However, we failed to note a steady state or any impairment of the energy input and output in the last few metres of the race. Data suggest that the 100 m is too short an event for the sprinter to be able to achieve his maximal power in such a distance.
Effect of core stability training on throwing velocity in female handball players.
Saeterbakken, Atle H; van den Tillaar, Roland; Seiler, Stephen
2011-03-01
The purpose was to study the effect of a sling exercise training (SET)-based core stability program on maximal throwing velocity among female handball players. Twenty-four female high-school handball players (16.6 ± 0.3 years, 63 ± 6 kg, and 169 ± 7 cm) participated and were initially divided into a SET training group (n = 14) and a control group (CON, n = 10). Both groups performed their regular handball training for 6 weeks. In addition, twice a week, the SET group performed a progressive core stability-training program consisting of 6 unstable closed kinetic chain exercises. Maximal throwing velocity was measured before and after the training period using photocells. Maximal throwing velocity significantly increased 4.9% from 17.9 ± 0.5 to 18.8 ± 0.4 m·s⁻¹ in the SET group after the training period (p < 0.01), but was unchanged in the control group (17.1 ± 0.4 vs. 16.9 ± 0.4 m·s⁻¹). These results suggest that core stability training using unstable, closed kinetic chain movements can significantly improve maximal throwing velocity. A stronger and more stable lumbopelvic-hip complex may contribute to higher rotational velocity in multisegmental movements. Strength coaches can incorporate exercises exposing the joints to destabilizing forces during closed kinetic chain training. This may encourage an effective neuromuscular pattern, increase force production, and improve a highly specific performance task such as throwing.
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which regularizes the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows comparable accuracy and a notably reduced computational burden; when compared to CLEAN, it shows better fidelity with respect to the measurements with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
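As a crude stand-in for the early-stopping idea (my own sketch, not the paper's stopping rule), the following MLEM loop on Poisson count data halts once the Cash statistic C = 2·Σ(model − data·ln model) stops improving appreciably; the system matrix, counts, and tolerance are toy assumptions.

```python
import numpy as np

def cash_statistic(data, model):
    model = np.clip(model, 1e-12, None)
    return 2.0 * np.sum(model - data * np.log(model))

def mlem(H, counts, n_max=500, tol=1e-4):
    x = np.ones(H.shape[1])
    sens = H.sum(axis=0)                              # sensitivity (column sums)
    c_prev = np.inf
    for it in range(n_max):
        model = np.clip(H @ x, 1e-12, None)
        x = x * (H.T @ (counts / model)) / sens       # multiplicative ML update (positive)
        c = cash_statistic(counts, H @ x)
        if abs(c_prev - c) < tol * abs(c_prev):       # stopping rule: C-statistic has plateaued
            break
        c_prev = c
    return x, it, c

rng = np.random.default_rng(7)
H = rng.uniform(0, 1, (200, 30))                      # toy modulation/response matrix
x_true = rng.gamma(2.0, 1.0, 30)
counts = rng.poisson(H @ x_true).astype(float)
x_hat, n_it, c = mlem(H, counts)
print("stopped after %d iterations, C-statistic %.1f" % (n_it, c))
```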
Neuromuscular fatigue following constant versus variable-intensity endurance cycling in triathletes.
Lepers, R; Theurel, J; Hausswirth, C; Bernard, T
2008-07-01
The aim of this study was to determine whether or not variable-power cycling produced greater neuromuscular fatigue of the knee extensor muscles than constant-power cycling at the same mean power output. Eight male triathletes (age: 33 ± 5 years, mass: 74 ± 4 kg, VO2max: 62 ± 5 mL·kg⁻¹·min⁻¹, maximal aerobic power: 392 ± 17 W) performed two 30 min trials on a cycle ergometer in random order. Cycling exercise was performed either at a constant power output (CP) corresponding to 75% of the maximal aerobic power (MAP) or at a variable power output (VP), alternating ±15%, ±5%, and ±10% of 75% MAP approximately every 5 min. Maximal voluntary contraction (MVC) torque, maximal voluntary activation level and the excitation-contraction coupling process of the knee extensor muscles were evaluated before and immediately after the exercise using the technique of electrically evoked contractions (single and paired stimulations). Oxygen uptake, ventilation and heart rate were also measured at regular intervals during the exercise. Averaged metabolic variables were not significantly different between the two conditions. Similarly, reductions in MVC torque (approximately -11%, P<0.05) after cycling were not different (P>0.05) between the CP and VP trials. The magnitude of central and peripheral fatigue was also similar at the end of the two cycling exercises. It is concluded that, following 30 min of endurance cycling, semi-elite triathletes experienced no additional neuromuscular fatigue from varying power (by ±5% to ±15%) compared with a protocol that involved constant power.
Data Unfolding with Wiener-SVD Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, W.; Li, X.; Qian, X.
Data unfolding is a common technique used in HEP data analysis. Inspired by the deconvolution technique in digital signal processing, a new unfolding technique based on the SVD technique and the well-known Wiener filter is introduced. The Wiener-SVD unfolding approach achieves the unfolding by maximizing the signal-to-noise ratios in the effective frequency domain given expectations of signal and noise, and is free from a regularization parameter. Through a couple of examples, the pros and cons of the Wiener-SVD approach as well as the nature of the unfolded results are discussed.
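A stripped-down sketch of the idea, under the simplifying assumption of pre-whitened, unit-variance noise: in the SVD basis of the response matrix each component is scaled by a Wiener factor S/(S+1) built from an expected signal, so no regularization parameter has to be tuned. The response matrix and expectations below are toy assumptions, not the authors' code.

```python
import numpy as np

def wiener_svd_unfold(R, measured, expected_signal):
    U, sv, Vt = np.linalg.svd(R, full_matrices=False)
    signal_power = (np.diag(sv) @ Vt @ expected_signal) ** 2   # expected power per SVD mode
    wiener = signal_power / (signal_power + 1.0)               # noise variance assumed to be 1
    return Vt.T @ (wiener / sv * (U.T @ measured))             # damped inverse in the SVD basis

rng = np.random.default_rng(8)
n = 40
R = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)  # smearing
truth = np.exp(-0.5 * ((np.arange(n) - 15) / 4.0) ** 2) * 50
measured = R @ truth + rng.standard_normal(n)               # unit-variance noise
unfolded = wiener_svd_unfold(R, measured, expected_signal=truth * 0.9)  # imperfect expectation
print("peak truth %.1f, peak unfolded %.1f" % (truth.max(), unfolded.max()))
```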
Critical spaces for quasilinear parabolic evolution equations and applications
NASA Astrophysics Data System (ADS)
Prüss, Jan; Simonett, Gieri; Wilke, Mathias
2018-02-01
We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.
Reach distance but not judgment error is associated with falls in older people.
Butler, Annie A; Lord, Stephen R; Fitzpatrick, Richard C
2011-08-01
Reaching is a vital action requiring precise motor coordination, and attempting to reach for objects that are too far away can destabilize balance and result in falls and injury. This could be particularly important for many elderly people with age-related loss of sensorimotor function and a reduced ability to recover balance. Here, we investigate the interaction between reaching ability, errors in judging reach, and the incidence of falling (retrospectively and prospectively) in a large cohort of older people. Participants (n = 415, 70-90 years) had to estimate the furthest distance at which they could reach to retrieve a broomstick hanging in front of them. In an iterative dialog with the experimenter, the stick was moved until it was at the furthest distance they estimated they could reach successfully. At this point, participants were asked to attempt to retrieve the stick. Actual maximal reach was then measured. The difference between attempted reach and actual maximal reach provided a measure of judgment error. One-year retrospective fall rates were obtained at the initial assessment and prospective falls were monitored by monthly calendar. Participants with poor maximal reach attempted shorter reaches than those who had good reaching ability. Those with the best reaching ability judged their maximal reach most accurately, whereas poor performers were dichotomous and either underestimated or overestimated their reach, with few judging it exactly. Fall rates were significantly associated with reach distance but not with reach judgment error. Maximal reach, but not error in perceived reach, is associated with falls in older people.
Heydari, Payam; Varmazyar, Sakineh; Variani, Ali Safari; Hashemi, Fariba; Ataei, Seyed Sajad
2017-10-01
The test of maximal oxygen consumption is the gold standard for measuring cardio-pulmonary fitness. This study aimed to determine the correlation among the Gerkin, Queen's College, George, and Jackson methods in estimating maximal oxygen consumption, and the demographic factors affecting maximal oxygen consumption. This descriptive cross-sectional study was conducted as a census of medical emergency students (n=57) in Qazvin University of Medical Sciences in 2016. The subjects first completed the General Health Questionnaire (PAR-Q) and demographic characteristics. Eligible subjects were then assessed using the Gerkin treadmill and Queen's College step exercise tests and the non-exercise George and Jackson methods. Data analysis was carried out using the independent t-test, one-way analysis of variance and Pearson correlation in the SPSS software. The mean age of participants was 21.69±4.99 years. The mean maximal oxygen consumption using the Gerkin, Queen's College, George, and Jackson tests was 4.17, 3.36, 3.64, and 3.63 liters per minute, respectively. The Pearson test showed a significant correlation among the four tests. The George and Jackson tests had the greatest correlation (r=0.85, p>0.001). One-way analysis of variance and t-tests showed a significant relationship between the independent variables of weight and height and the dependent variable of maximal oxygen consumption in all four tests. Also, there was a significant relationship between body mass index and the Gerkin and Queen's College tests, and between exercise hours per week and the George and Jackson tests (p>0.001). Given the obtained correlations, these tests have the potential to replace each other as necessary, so that the non-exercise Jackson test can be used instead of the Gerkin test.
Genomic predictors of the maximal O2 uptake response to standardized exercise training programs
Sarzynski, Mark A.; Rice, Treva K.; Kraus, William E.; Church, Timothy S.; Sung, Yun Ju; Rao, D. C.; Rankinen, Tuomo
2011-01-01
Low cardiorespiratory fitness is a powerful predictor of morbidity and cardiovascular mortality. In 473 sedentary adults, all whites, from 99 families of the Health, Risk Factors, Exercise Training, and Genetics (HERITAGE) Family Study, the heritability of gains in maximal O2 uptake (V̇o2max) after exposure to a standardized 20-wk exercise program was estimated at 47%. A genome-wide association study based on 324,611 single-nucleotide polymorphisms (SNPs) was undertaken to identify SNPs associated with improvements in V̇o2max. Based on single-SNP analysis, 39 SNPs were associated with the gains with P < 1.5 × 10−4. Stepwise multiple regression analysis of the 39 SNPs identified a panel of 21 SNPs that accounted for 49% of the variance in V̇o2max trainability. Subjects who carried ≤9 favorable alleles at these 21 SNPs improved their V̇o2max by 221 ml/min, whereas those who carried ≥19 of these alleles gained, on average, 604 ml/min. The strongest association was with rs6552828, located in the acyl-CoA synthase long-chain member 1 (ACSL1) gene, which accounted by itself for ∼6% of the training response of V̇o2max. The genes nearest to the SNPs that were the strongest predictors were PR domain-containing 1 with ZNF domain (PRDM1); glutamate receptor, ionotropic, N-methyl-d-aspartate 3A (GRIN3A); K+ channel, voltage gated, subfamily H, member 8 (KCNH8); and zinc finger protein of the cerebellum 4 (ZIC4). The association with the SNP nearest to ZIC4 was replicated in 40- to 65-yr-old, sedentary, overweight, and dyslipidemic subjects trained in Studies of a Targeted Risk Reduction Intervention Through Defined Exercise (STRRIDE; n = 183). Two SNPs were replicated in sedentary obese white women exercise trained in the Dose Response to Exercise (DREW) study (n = 112): rs1956197 near dishevelled associated activator of morphogenesis 1 (DAAM1) and rs17117533 in the vicinity of necdin (NDN). The associations of SNPs rs884736 in the calmodulin-binding transcription activator 1 (CAMTA1) locus and rs17581162 ∼68 kb upstream from regulator of G protein signaling 18 (RGS18) with the gains in V̇o2max in HERITAGE whites were replicated in HERITAGE blacks (n = 247). These genomic predictors of the response of V̇o2max to regular exercise provide new targets for the study of the biology of fitness and its adaptation to regular exercise. Large-scale replication studies are warranted. PMID:21183627
ERIC Educational Resources Information Center
Mott, Willie J., Comp.
This manual brings together in one package basic knowledge and facts about estimating to assist the building trades instructor in developing his instructional program. Teaching estimating as a separate unit and integrating it into a regular program are presented as two different instructional approaches. After a description of the estimating process…
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires tuning several parameters that balance the optical flow constraint against the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth against which to validate the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications such as motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
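The abstract does not specify which optical flow model was used; as a minimal sketch of how a smoothness (regularization) weight enters a variational optical flow estimator, the code below implements the classical Horn-Schunck iteration, with the parameter alpha controlling the imposed smoothness. All names and settings are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    """Classical Horn-Schunck optical flow; alpha weights the smoothness term."""
    I1, I2 = I1.astype(float), I2.astype(float)
    # Spatial and temporal derivatives via simple finite-difference kernels
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)
    # Kernel that computes the local average of the flow field
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Larger alpha -> smoother, lower-resolution velocity estimates
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```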
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging has been widely used in the clinic as an accurate and fast examination for acute ischemic stroke. However, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the case of low-mAs acquisition.
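The paper's STV regularizer and its specific shrinkage/thresholding scheme are not detailed in the abstract; the sketch below only illustrates the generic iterative shrinkage/thresholding (ISTA) pattern for an l1-regularized linear inverse problem, i.e., the family the shrinkage step belongs to. The operator A, data y, and parameters are placeholders, not the PD-STV algorithm itself.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage/thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # gradient of the data-fidelity term
        z = x - grad / L                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding (shrinkage)
    return x
```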
NASA Astrophysics Data System (ADS)
Uieda, Leonardo; Barbosa, Valéria C. F.
2017-01-01
Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is largest in the Andes and smallest in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
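As a hedged, generic illustration of the hold-out cross-validation step mentioned above (not the authors' tesseroid-based implementation), the sketch below selects the smoothness weight of a Tikhonov-regularized linear inversion by minimizing the misfit on a held-out subset of the observations. Matrix A, data d, and the candidate lambdas are placeholders.

```python
import numpy as np

def holdout_best_lambda(A, d, lambdas, holdout_frac=0.3, seed=0):
    """Pick the regularization parameter that best predicts held-out observations."""
    rng = np.random.default_rng(seed)
    n = len(d)
    test = rng.choice(n, size=int(holdout_frac * n), replace=False)
    train = np.setdiff1d(np.arange(n), test)
    # First-difference matrix as a simple smoothness regularizer
    R = np.diff(np.eye(A.shape[1]), axis=0)
    errors = []
    for lam in lambdas:
        At, dt = A[train], d[train]
        m = np.linalg.solve(At.T @ At + lam * R.T @ R, At.T @ dt)
        errors.append(np.mean((A[test] @ m - d[test]) ** 2))
    return lambdas[int(np.argmin(errors))]
```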
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, and the method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output LTE antenna system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
MRI Estimates of Brain Iron Concentration in Normal Aging Using Quantitative Susceptibility Mapping
Bilgic, Berkin; Pfefferbaum, Adolf; Rohlfing, Torsten; Sullivan, Edith V.; Adalsteinsson, Elfar
2011-01-01
Quantifying tissue iron concentration in vivo is instrumental for understanding the role of iron in physiology and in neurological diseases associated with abnormal iron distribution. Herein, we use recently-developed Quantitative Susceptibility Mapping (QSM) methodology to estimate the tissue magnetic susceptibility based on MRI signal phase. To investigate the effect of different regularization choices, we implement and compare ℓ1 and ℓ2 norm regularized QSM algorithms. These regularized approaches solve for the underlying magnetic susceptibility distribution, a sensitive measure of the tissue iron concentration, that gives rise to the observed signal phase. Regularized QSM methodology also involves a pre-processing step that removes, by dipole fitting, unwanted background phase effects due to bulk susceptibility variations between air and tissue and requires data acquisition only at a single field strength. For validation, performances of the two QSM methods were measured against published estimates of regional brain iron from postmortem and in vivo data. The in vivo comparison was based on data previously acquired using Field-Dependent Relaxation Rate Increase (FDRI), an estimate of MRI relaxivity enhancement due to increased main magnetic field strength, requiring data acquired at two different field strengths. The QSM analysis was based on susceptibility-weighted images acquired at 1.5T, whereas FDRI analysis used Multi-Shot Echo-Planar Spin Echo images collected at 1.5T and 3.0T. Both datasets were collected in the same healthy young and elderly adults. The in vivo estimates of regional iron concentration comported well with published postmortem measurements; both QSM approaches yielded the same rank ordering of iron concentration by brain structure, with the lowest in white matter and the highest in globus pallidus. Further validation was provided by comparison of the in vivo measurements, ℓ1-regularized QSM versus FDRI and ℓ2-regularized QSM versus FDRI, which again yielded perfect rank ordering of iron by brain structure. The final means of validation was to assess how well each in vivo method detected known age-related differences in regional iron concentrations measured in the same young and elderly healthy adults. Both QSM methods and FDRI were consistent in identifying higher iron concentrations in striatal and brain stem ROIs (i.e., caudate nucleus, putamen, globus pallidus, red nucleus, and substantia nigra) in the older than in the young group. The two QSM methods appeared more sensitive in detecting age differences in brain stem structures as they revealed differences of much higher statistical significance between the young and elderly groups than did FDRI. However, QSM values are influenced by factors such as the myelin content, whereas FDRI is a more specific indicator of iron content. Hence, FDRI demonstrated higher specificity to iron yet yielded noisier data despite longer scan times and lower spatial resolution than QSM. The robustness, practicality, and demonstrated ability of predicting the change in iron deposition in adult aging suggest that regularized QSM algorithms using single-field-strength data are possible alternatives to tissue iron estimation requiring two field strengths. PMID:21925274
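The specific ℓ1 and ℓ2 algorithms compared in the paper are not reproducible from the abstract alone; as a rough sketch of the ℓ2-regularized case, the code below performs a closed-form Tikhonov dipole inversion in k-space, assuming the local (background-removed) field map is already available, that the main field is along z, and that the dipole kernel takes its standard form D(k) = 1/3 - kz^2/|k|^2. Voxel size and the regularization weight are illustrative assumptions.

```python
import numpy as np

def l2_qsm(local_field, voxel_size=(1.0, 1.0, 1.0), lam=1e-2):
    """Closed-form l2 (Tikhonov) dipole inversion: chi = F^-1[ conj(D) F(f) / (|D|^2 + lam) ]."""
    nx, ny, nz = local_field.shape
    kx = np.fft.fftfreq(nx, d=voxel_size[0])
    ky = np.fft.fftfreq(ny, d=voxel_size[1])
    kz = np.fft.fftfreq(nz, d=voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - np.where(k2 > 0, KZ**2 / k2, 0.0)   # dipole kernel, B0 along z
    F = np.fft.fftn(local_field)
    chi_k = np.conj(D) * F / (np.abs(D) ** 2 + lam)          # Tikhonov-regularized inverse filter
    return np.real(np.fft.ifftn(chi_k))
```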
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit when the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
Zonal wavefront estimation using an array of hexagonal grating patterns
NASA Astrophysics Data System (ADS)
Pathak, Biswajit; Boruah, Bosanta R.
2014-10-01
The accuracy of Shack-Hartmann type wavefront sensors depends on the shape and layout of the lenslet array that samples the incoming wavefront. It has been shown that an array of gratings followed by a focusing lens provides a substitute for the lenslet array. Taking advantage of the computer generated holography technique, a diffraction grating array of arbitrary aperture shape, size or pattern can be designed with little penalty for complexity. In the present work, such a holographic technique is implemented to design a regular hexagonal grating array with zero dead space between grating patterns, eliminating the possibility of wavefront leakage during estimation of the wavefront. Tessellation of the regular hexagonal shape, unlike other commonly used shapes, also reduces the estimation error by incorporating a larger number of neighboring slope values at equal separation.
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels, the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats
2014-05-01
In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the predictions to stay close to the data even when the model parameters are incorrect. The extended Kalman filter is used as a state estimator and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered, and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
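A minimal sketch (not the authors' implementation) of the idea of scoring parameters with an extended Kalman filter once process noise is added to the model: for a scalar nonlinear state equation, the EKF prediction/update recursion below accumulates the Gaussian negative log-likelihood that would serve as the smoothed objective function. The drift function, noise levels, and Euler discretization are all assumptions made for illustration.

```python
import numpy as np

def ekf_neg_log_lik(theta, y, dt=0.1, q=0.05, r=0.1, x0=0.0, p0=1.0):
    """EKF negative log-likelihood for x' = theta*x - x**3 plus process noise, observed with noise."""
    x, p, nll = x0, p0, 0.0
    for yk in y:
        # Predict: Euler step of the drift and its Jacobian
        f = x + dt * (theta * x - x**3)
        jac = 1.0 + dt * (theta - 3.0 * x**2)
        x, p = f, jac * p * jac + q * dt
        # Update with the scalar observation y_k = x_k + measurement noise
        s = p + r
        nll += 0.5 * (np.log(2 * np.pi * s) + (yk - x) ** 2 / s)
        k = p / s
        x = x + k * (yk - x)
        p = (1.0 - k) * p
    return nll     # minimize over theta with a gradient-based optimizer
```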
NASA Astrophysics Data System (ADS)
Liu, Jie; Wang, Wilson; Ma, Fai
2011-07-01
System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
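The abstract describes two innovations: regularizing the empirical posterior so that resampled particles are drawn from a continuous density, and a rejection/resampling safeguard. The sketch below shows only the first, generic idea (not the authors' RAPF): after multinomial resampling, Gaussian kernel jitter with a Silverman-style bandwidth is added to diversify the particles. State dimension and bandwidth rule are assumptions.

```python
import numpy as np

def regularized_resample(particles, weights, rng=None):
    """Resample 1-D particles, then jitter them with a kernel bandwidth (regularization step)."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)            # multinomial resampling
    resampled = particles[idx]
    # Silverman-style bandwidth for a 1-D Gaussian kernel
    h = 1.06 * np.std(resampled) * n ** (-1.0 / 5.0)
    return resampled + h * rng.standard_normal(n)     # draw from the smoothed, continuous density
```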
Févotte, Cédric; Bertin, Nancy; Durrieu, Jean-Louis
2009-03-01
This letter presents theoretical, algorithmic, and experimental results about nonnegative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. We describe how IS-NMF is underlaid by a well-defined statistical model of superimposed gaussian components and is equivalent to maximum likelihood estimation of variance parameters. This setting can accommodate regularization constraints on the factors through Bayesian priors. In particular, inverse-gamma and gamma Markov chain priors are considered in this work. Estimation can be carried out using a space-alternating generalized expectation-maximization (SAGE) algorithm; this leads to a novel type of NMF algorithm, whose convergence to a stationary point of the IS cost function is guaranteed. We also discuss the links between the IS divergence and other cost functions used in NMF, in particular, the Euclidean distance and the generalized Kullback-Leibler (KL) divergence. As such, we describe how IS-NMF can also be performed using a gradient multiplicative algorithm (a standard algorithm structure in NMF) whose convergence is observed in practice, though not proven. Finally, we report a furnished experimental comparative study of Euclidean-NMF, KL-NMF, and IS-NMF algorithms applied to the power spectrogram of a short piano sequence recorded in real conditions, with various initializations and model orders. Then we show how IS-NMF can successfully be employed for denoising and upmix (mono to stereo conversion) of an original piece of early jazz music. These experiments indicate that IS-NMF correctly captures the semantics of audio and is better suited to the representation of music signals than NMF with the usual Euclidean and KL costs.
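The SAGE algorithm itself is not easily compressed into a few lines, but the gradient-multiplicative IS-NMF variant mentioned at the end of the abstract has well-known update rules; the sketch below implements them for a power spectrogram V ≈ WH. Rank, iteration count, and initialization are placeholders, and convergence is the practically observed (not proven) behavior noted in the abstract.

```python
import numpy as np

def is_nmf(V, k=8, n_iter=200, eps=1e-12, seed=0):
    """Multiplicative updates for NMF with the Itakura-Saito divergence."""
    rng = np.random.default_rng(seed)
    f, n = V.shape
    W = rng.random((f, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V * WH**-2)) / (W.T @ WH**-1 + eps)   # update activations
        WH = W @ H + eps
        W *= ((V * WH**-2) @ H.T) / (WH**-1 @ H.T + eps)   # update spectral templates
    return W, H
```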
NASA Technical Reports Server (NTRS)
Monk, T. H.; Petrie, S. R.; Hayes, A. J.; Kupfer, D. J.
1994-01-01
A diary-like instrument to measure lifestyle regularity (the 'Social Rhythm Metric'-SRM) was given to 96 subjects (48 women, 48 men), 39 of whom repeated the study after at least one year, with additional objective measures of rest/activity. Lifestyle regularity as measured by the SRM related to age, morningness, subjective sleep quality and time-of-day variations in alertness, but not to gender, extroversion or neuroticism. Statistically significant test-retest correlations of about 0.4 emerged for SRM scores over the 12-30 month delay. Diary-based estimates of bedtime and waketime appeared fairly reliable. In a further study of healthy young men, 4 high SRM scorers ('regular') had a deeper nocturnal body temperature trough than 5 low SRM scorers ('irregular'), suggesting a better functioning circadian system in the 'regular' group.
NASA Astrophysics Data System (ADS)
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
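As a bare-bones illustration of the moisture maximization idea (not the full regional-climate-model methodology above), the sketch below scales each candidate snowstorm's accumulation by the ratio of the monthly maximum precipitable water to the precipitable water observed during the storm; the cap on the ratio is an assumption sometimes applied in practice, not a value from the paper.

```python
def maximized_snowfall(storm_snow_mm, storm_pw_mm, monthly_max_pw_mm, ratio_cap=2.0):
    """Moisture maximization: scale storm snowfall by the precipitable-water ratio."""
    ratio = min(monthly_max_pw_mm / storm_pw_mm, ratio_cap)
    return storm_snow_mm * ratio

# PMSA estimate = largest maximized accumulation over all candidate storms (illustrative numbers)
storms = [(35.0, 8.0), (50.0, 12.0), (28.0, 6.0)]   # (snowfall mm, precipitable water mm)
pmsa = max(maximized_snowfall(s, pw, monthly_max_pw_mm=15.0) for s, pw in storms)
```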
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Besides, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position improves significantly the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
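The regularization described above, discarding strongly decaying (inefficiently radiated) spherical waves, amounts in practice to a truncated least-squares solution; the sketch below shows a generic version in which singular modes of the secondary-source transfer matrix below an efficiency threshold are dropped before computing the driving signals. The matrix G, target field b, and threshold are placeholders, not the paper's exact procedure.

```python
import numpy as np

def truncated_driving_signals(G, b, efficiency_threshold=1e-2):
    """Least-squares loudspeaker driving signals, discarding weakly radiating modes."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s / s.max() > efficiency_threshold        # drop modes the array cannot radiate efficiently
    coeff = (U[:, keep].conj().T @ b) / s[keep]
    return Vt[keep].conj().T @ coeff                 # driving signals without transducer overload
```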
NASA Astrophysics Data System (ADS)
Cheong, Kwang-Ho; Lee, MeYeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, SoAh; Hwang, Taejin; Kim, Haeyoung; Kim, KyoungJu; Han, Tae Jin; Bae, Hoonsik
2015-01-01
Despite the considerable importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, not to mention the necessity of maintaining that regularity through the following sessions, an effective and simply applicable method by which those goals can be accomplished has rarely been reported. The authors herein propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a cos⁴(ω(t)·t) waveform with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period (s_f), the sample standard deviation of the amplitude (s_a), and the results of a simple regression of the baseline drift (slope β and standard deviation of residuals σ_r) of a respiration signal. The overall irregularity (δ) was defined in terms of a variable newly derived by applying principal component analysis (PCA) to the four fluctuation parameters, which has two principal components (ω_1, ω_2). The proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. We investigated its clinical relevance by comparing it with other known parameters. Subsequently, we applied it to 110 respiration signals acquired from five liver and five lung cancer patients by using real-time position management (RPM; Varian Medical Systems, Palo Alto, CA). Correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Additionally, the respiration regularity was compared between the liver and lung cancer patient groups. The respiration regularity was determined based on ρ; patients with ρ < 0.3 showed worse regularity than the others, whereas ρ > 0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in the breathing cycle and the amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, the regularity of the subsequent sessions could be estimated from it. Notably, the breathing patterns of the lung cancer patients were more irregular than those of the liver cancer patients. Respiration regularity could be objectively determined by using a composite index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases.
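The PCA-based combination of the four fluctuation parameters into δ is not given in the abstract and is not reproduced here; the sketch below only extracts the four parameters themselves (s_f, s_a, baseline slope β, residual standard deviation σ_r) from a respiration trace and applies ρ = ln(1 + 1/δ)/2 once a δ value is supplied. The peak-detection settings are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def fluctuation_parameters(signal, fs):
    """Extract s_f, s_a, baseline slope and residual SD from a respiration trace (fs in Hz)."""
    peaks, _ = find_peaks(signal, distance=int(1.5 * fs))    # assumes breaths > 1.5 s apart
    periods = np.diff(peaks) / fs
    s_f = np.std(periods, ddof=1)                            # breathing-period fluctuation
    s_a = np.std(signal[peaks], ddof=1)                      # amplitude fluctuation
    t = np.arange(len(signal)) / fs
    beta, intercept = np.polyfit(t, signal, 1)               # baseline drift slope
    sigma_r = np.std(signal - (beta * t + intercept), ddof=1)
    return s_f, s_a, beta, sigma_r

def regularity_index(delta):
    """rho = ln(1 + 1/delta)/2; higher rho indicates a more regular breathing pattern."""
    return np.log(1.0 + 1.0 / delta) / 2.0
```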
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It is also revealed that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the “oracle” choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
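As an illustration of the Generalized Cross-Validation criterion used above to pick the regularization parameter (shown here for a generic ridge-regularized linear problem, not the rAR/rREST estimators themselves; names and candidate values are placeholders):

```python
import numpy as np

def gcv_score(A, y, lam):
    """GCV(lambda) = n * ||(I - S) y||^2 / (n - trace(S))^2 for the ridge smoother S."""
    n = len(y)
    S = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    resid = y - S @ y
    return n * float(resid @ resid) / (n - np.trace(S)) ** 2

def pick_lambda(A, y, lambdas):
    """Return the candidate regularization parameter with the lowest GCV score."""
    return min(lambdas, key=lambda lam: gcv_score(A, y, lam))
```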
ERIC Educational Resources Information Center
Syracuse Univ., NY. Div. of Special Education and Rehabilitation.
An evaluation of the costs of serving handicapped children in Head Start was based on information collected in conjunction with on-site visits to regular Head Start programs, experimental programs, and specially selected model preschool programs, and from questionnaires completed by 1,353 grantees and delegate agencies of regular Head Start…
1983-03-09
that maximize electromagnetic compatibility potential. -- Providing direct assistance on a reimbursable basis to DOD and other Government agencies on...value, we estimated that reimbursable real estate expenses would average about $6,458 rather than $4,260 included in the Air Force estimate. When the...of estimated reimbursement was assumed to be necessary to encourage the relocation of more professional employees and increase their estimated
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
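A hedged sketch of the evaluation strategy described above (repeated reconstructions with random measurement noise, splitting image MSE into bias and variance), here using a plain Tikhonov-regularized linear reconstruction as a stand-in for the NIR model-based reconstruction algorithm; the forward matrix, noise level, and repeat count are placeholders.

```python
import numpy as np

def mse_decomposition(A, x_true, lam, noise_sd, n_repeats=100, seed=0):
    """Monte Carlo estimate of image bias^2 and variance for one regularization parameter."""
    rng = np.random.default_rng(seed)
    y_clean = A @ x_true
    recons = []
    for _ in range(n_repeats):
        y = y_clean + noise_sd * rng.standard_normal(len(y_clean))
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        recons.append(x_hat)
    recons = np.array(recons)
    bias2 = np.mean((recons.mean(axis=0) - x_true) ** 2)
    var = np.mean(recons.var(axis=0))
    return bias2, var, bias2 + var   # bias dominates at large lam, variance as lam shrinks
```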
Mapping intra-urban transmission risk of dengue fever with big hourly cellphone data.
Mao, Liang; Yin, Ling; Song, Xiaoqing; Mei, Shujiang
2016-10-01
Cellphone tracking has been recently integrated into risk assessment of disease transmission, because travel behavior of disease carriers can be depicted in unprecedented details. Still in its infancy, such an integration has been limited to: 1) risk assessment only at national and provincial scales, where intra-urban human movements are neglected, and 2) using irregularly logged cellphone data that miss numerous user movements. Furthermore, few risk assessments have considered positional uncertainty of cellphone data. This study proposed a new framework for mapping intra-urban disease risk with regularly logged cellphone tracking data, taking the dengue fever in Shenzhen city as an example. Hourly tracking records of 5.85 million cellphone users, combined with the random forest classification and mosquito activities, were utilized to estimate the local transmission risk of dengue fever and the importation risk through travels. Stochastic simulations were further employed to quantify the uncertainty of risk. The resultant maps suggest targeted interventions to maximally reduce dengue cases exported to other places, as well as appropriate interventions to contain risk in places that import them. Given the popularity of cellphone use in urbanized areas, this framework can be adopted by other cities to design spatio-temporally resolved programs for disease control. Copyright © 2016 Elsevier B.V. All rights reserved.
Hopkins, Melanie J; Smith, Andrew B
2015-03-24
How ecological and morphological diversity accrues over geological time has been much debated by paleobiologists. Evidence from the fossil record suggests that many clades reach maximal diversity early in their evolutionary history, followed by a decline in evolutionary rates as ecological space fills or due to internal constraints. Here, we apply recently developed methods for estimating rates of morphological evolution during the post-Paleozoic history of a major invertebrate clade, the Echinoidea. Contrary to expectation, rates of evolution were lowest during the initial phase of diversification following the Permo-Triassic mass extinction and increased over time. Furthermore, although several subclades show high initial rates and net decreases in rates of evolution, consistent with "early bursts" of morphological diversification, at more inclusive taxonomic levels, these bursts appear as episodic peaks. Peak rates coincided with major shifts in ecological morphology, primarily associated with innovations in feeding strategies. Despite having similar numbers of species in today's oceans, regular echinoids have accrued far less morphological diversity than irregular echinoids due to lower intrinsic rates of morphological evolution and less morphological innovation, the latter indicative of constrained or bounded evolution. These results indicate that rates of evolution are extremely heterogeneous through time and their interpretation depends on the temporal and taxonomic scale of analysis.
Context-aware adaptive spelling in motor imagery BCI
NASA Astrophysics Data System (ADS)
Perdikis, S.; Leeb, R.; Millán, J. d. R.
2016-06-01
Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
Context-aware adaptive spelling in motor imagery BCI.
Perdikis, S; Leeb, R; Millán, J D R
2016-06-01
This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject's performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree's language model to improve online expectation-maximization maximum-likelihood estimation. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
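A minimal sketch (not the authors' analytically derived covariance approach) of how an Euler vector can be estimated from site velocities with simple Tikhonov damping: each site velocity obeys v = ω × r, so stacking the skew-symmetric matrices of the site position vectors gives a linear system whose collinearity, caused by limited network extent, can be tamed by the regularization term. Units and the damping value are illustrative.

```python
import numpy as np

def skew(r):
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def estimate_euler_vector(site_positions, site_velocities, lam=1e-3):
    """Regularized least squares for omega in v_i = omega x r_i (Earth-centered Cartesian frame)."""
    A = np.vstack([-skew(r) for r in site_positions])   # v = omega x r = -[r]_x omega
    d = np.concatenate(site_velocities)
    return np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ d)
```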
77 FR 61008 - Request for Comments Under the Paperwork Reduction Act, Section 3506
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... questions to be addressed, data sharing maximizes the public benefit achieved through research investments... to reduce paperwork and respondent burden, invites the general public and other Federal agencies to...; and Estimated Total Annual Burden Hours Requested: 63. The annual cost to respondents is estimated at...
Dexter, Franklin; Epstein, Richard H; Dutton, Richard P; Kordylewski, Hubert; Ledolter, Johannes; Rosenberg, Henry; Hindman, Bradley J
2016-12-01
Anesthesiologists providing care during off hours (ie, weekends or holidays, or cases started during the evening or late afternoon) are more likely to care for patients at greater risk of sustaining major adverse events than when they work during regular hours (eg, Monday through Friday, from 7:00 AM to 2:59 PM). We consider the logical inconsistency of using subspecialty teams during regular hours but not during weekends or evenings. We analyzed data from the Anesthesia Quality Institute's National Anesthesia Clinical Outcomes Registry (NACOR). Among the hospitals in the United States, we estimated the average number of common types of anesthesia procedures (ie, diversity measured as inverse of Herfindahl index), and the average difference in the number of common procedures between 2 off-hours periods (regular hours versus weekends, and regular hours versus evenings). We also used NACOR data to estimate the average similarity in the distributions of procedures between regular hours and weekends and between regular hours and evenings in US facilities. Results are reported as mean ± standard error of the mean among 399 facilities nationwide with weekend cases. The distributions of common procedures were moderately similar (ie, not large, <.8) between regular hours and evenings (similarity index .59 ± .01) and between regular hours and weekends (similarity index, .55 ± .02). For most facilities, the number of common procedures differed by <5 procedures between regular hours and evenings (74.4% of facilities, P < .0001) and between regular hours and weekends (64.7% of facilities, P < .0001). The average number of common procedures was 13.59 ± .12 for regular hours, 13.12 ± .13 for evenings, and 9.43 ± .13 for weekends. The pairwise differences by facility were .13 ± .07 procedures (P = .090) between regular hours and evenings and 3.37 ± .12 procedures (P < .0001) between regular hours and weekends. In contrast, the differences were -5.18 ± .12 and 7.59 ± .13, respectively, when calculated using nationally pooled data. This was because the numbers of common procedures were 32.23 ± .05, 37.41 ± .11, and 24.64 ± .12 for regular hours, evenings, and weekends, respectively (ie, >2x the number of common procedures calculated by facility). The numbers of procedures commonly performed at most facilities are fewer in number than those that are commonly performed nationally. Thus, decisions on anesthesia specialization should be based on quantitative analysis of local data rather than national recommendations using pooled data. By facility, the number of different procedures that take place during regular hours and off hours (diversity) is essentially the same, but there is only moderate similarity in the procedures performed. Thus, at many facilities, anesthesiologists who work principally within a single specialty during regular work hours will likely not have substantial contemporary experience with many procedures performed during off hours.
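The diversity measure described above (the inverse of the Herfindahl index over procedure proportions) is straightforward to compute; the sketch below shows it for one facility's procedure counts with made-up numbers. The paper's similarity index between two case-mix distributions is not specified in the abstract and is therefore not reproduced here.

```python
import numpy as np

def procedure_diversity(counts):
    """Inverse Herfindahl index: effective number of commonly performed procedure types."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

# A facility dominated by one procedure type has low diversity (illustrative counts)
print(procedure_diversity([120, 30, 20, 10, 5]))
```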
The Role of Personality in a Regular Cognitive Monitoring Program.
Sadeq, Nasreen A; Valdes, Elise G; Harrison Bush, Aryn L; Andel, Ross
2018-02-20
This study examines the role of personality in cognitive performance, adherence, and satisfaction with regular cognitive self-monitoring. One hundred fifty-seven cognitively healthy older adults, age 55+, completed the 44-item Big-Five Inventory and were subsequently engaged in online monthly cognitive monitoring using the Cogstate Brief Battery for up to 35 months (M=14 mo, SD=7 mo). The test measures speed and accuracy in reaction time, visual learning, and working memory tasks. Neuroticism, although not related to cognitive performance overall (P>0.05), was related to a greater increase in accuracy (estimate=0.07, P=0.04) and speed (estimate=-0.09, P=0.03) on One Card Learning. Greater conscientiousness was related to faster overall speed on Detection (estimate=-1.62, P=0.02) and a significant rate of improvement in speed on One Card Learning (estimate=-0.10, P<0.03). No differences in satisfaction or adherence to monthly monitoring as a function of neuroticism or conscientiousness were observed. Participants volunteering for regular cognitive monitoring may be quite uniform in terms of personality traits, with personality traits playing a relatively minor role in adherence and satisfaction. The more neurotic may exhibit better accuracy and improve in speed with time, whereas the more conscientious may perform faster overall and improve in speed on some tasks, but the effects appear small.
NASA Astrophysics Data System (ADS)
Zhai, Liang; Li, Shuang; Zou, Bin; Sang, Huiyong; Fang, Xin; Xu, Shan
2018-05-01
Considering the spatially non-stationary contributions of environmental variables to PM2.5 variations, the geographically weighted regression (GWR) modeling method has been widely used to estimate PM2.5 concentrations. However, most of the GWR models in studies reported so far were established based on predictors screened through pretreatment correlation analysis, and this process might omit factors that actually drive PM2.5 variations. This study therefore developed a best subsets regression (BSR) enhanced principal component analysis-GWR (PCA-GWR) modeling approach to estimate PM2.5 concentration by fully considering all the potential variables' contributions simultaneously. A performance comparison experiment between PCA-GWR and regular GWR was conducted in the Beijing-Tianjin-Hebei (BTH) region over a one-year period. Results indicated that the PCA-GWR modeling outperforms the regular GWR modeling, with clearly higher model-fitting and cross-validation adjusted R2 and lower RMSE. Meanwhile, the distribution map of PM2.5 concentration from PCA-GWR modeling also depicts more spatial variation detail than the one from regular GWR modeling. It can be concluded that the BSR enhanced PCA-GWR modeling could be a reliable way to estimate air pollution concentrations in the future because it involves the contributions of all potential predictor variables to PM2.5 variations.
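A condensed sketch of the PCA-GWR idea (with assumed bandwidth, kernel, and variable names, not the authors' exact BSR-enhanced workflow): project all standardized predictors onto a few principal components, then fit a locally weighted regression at each target location using Gaussian distance weights.

```python
import numpy as np

def pca_gwr_predict(X, y, coords, target_coords, target_X, n_components=3, bandwidth=50.0):
    """PCA on the predictors followed by geographically weighted regression."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)     # principal components of predictors
    Z = Xs @ Vt[:n_components].T
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    preds = []
    for t_xy, t_x in zip(target_coords, target_X):
        d = np.linalg.norm(coords - t_xy, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
        Zw = Z1 * w[:, None]
        beta = np.linalg.solve(Z1.T @ Zw, Zw.T @ y)       # local weighted least squares
        z_t = ((t_x - mu) / sd) @ Vt[:n_components].T
        preds.append(np.concatenate([[1.0], z_t]) @ beta)
    return np.array(preds)
```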
Verveer, P. J; Gemkow, M. J; Jovin, T. M
1999-01-01
We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given sufficient higher signal level for the wide-field data the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
The ARM Best Estimate 2-dimensional Gridded Surface
Xie, Shaocheng; Qi, Tang
2015-06-15
The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
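A small, generic illustration (not the ARMBE2DGRID production code) of interpolating scattered station measurements onto a regular 2D grid with SciPy; the station coordinates and values are hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical station longitudes, latitudes, and a surface variable (e.g., temperature in K)
lon = np.array([-99.5, -98.2, -97.8, -96.9])
lat = np.array([36.1, 36.8, 35.9, 37.2])
temp = np.array([291.2, 290.5, 292.0, 289.8])

# Regular target grid covering the region, then linear interpolation of the station data
grid_lon, grid_lat = np.meshgrid(np.linspace(-100, -96, 41), np.linspace(35.5, 37.5, 21))
temp_grid = griddata((lon, lat), temp, (grid_lon, grid_lat), method="linear")
```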
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, i.e., those that best matched the dipping layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide results that were as realistic. Thus, we consider that even when only qualitative (i.e., visual) a priori information about a site is available, as in the case of the East Canyon Dam, Utah, it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.
A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter
2015-06-27
function is the information theoretic part of the problem and aims for entropy maximization, while the second one arises from the constraint in the...objective functions in conflict. The first objective function is the information theoretic part of the problem and aims for entropy maximization...theory. For the sake of completeness and clarity, we also summarize how each concept is utilized later. Entropy: A random variable is statistically
Transformative Advances in DDDAS with Application to Space Weather Monitoring
2015-10-01
subsystems, including power and communications subsystems. In addition, a study of photovoltaic power generation constraints due to spacecraft solar...estimation. Automatica, 23:775–778, 1987. [61] D. Y. Lee, J. W. Cutler, J. Mancewicz, and A. J. Ridley. Maximizing photovoltaic power generation of a space...Maximizing photovoltaic power generation of a space-dart configured satellite. Acta Astronautica, 111:283–299, 2015. A. V. Morozov, A. J. Ridley, D. S
15 CFR 90.8 - Evidence required.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., DEPARTMENT OF COMMERCE PROCEDURE FOR CHALLENGING POPULATION ESTIMATES § 90.8 Evidence required. (a) The... the criteria, standards, and regular processes the Census Bureau employs to generate the population... uses a cohort-component of change method to produce population estimates. Each year, the components of...
Adaptive finite element modeling techniques for the Poisson-Boltzmann equation
Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
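For orientation, a commonly cited form of the two-term splitting discussed above is sketched in LaTeX below; this is stated from general familiarity with the Chen-Holst-Xu regularization rather than from the article itself, so constants, unit conventions, and boundary terms should be checked against the paper. The potential is written as u = G + u^r, where G collects the singular Coulomb contributions of the atomic point charges and the regular part u^r solves a PBE-like equation with the singularity moved to the right-hand side.

```latex
% Singular (Coulomb) part collecting the atomic point charges, and the splitting
G(x) = \sum_{i=1}^{N} \frac{q_i}{\epsilon_m \lvert x - x_i \rvert}, \qquad u = G + u^{r}.
% Regularized equation for the smooth remainder u^r (up to unit conventions)
-\nabla \cdot \bigl(\epsilon \nabla u^{r}\bigr)
  + \bar{\kappa}^{2} \sinh\bigl(u^{r} + G\bigr)
  = \nabla \cdot \bigl((\epsilon - \epsilon_m)\,\nabla G\bigr) \quad \text{in } \Omega,
\qquad u^{r} = g - G \ \text{on } \partial\Omega .
```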
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study aimed to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of the accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared to the use of CR. Key Points It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However, data on the precision of those AOD measurements are rarely provided. We have evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of sub-threshold regressions may induce an underestimation of AOD compared to the use of complete regressions. PMID:24501560
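A small, hedged sketch of the accumulated-oxygen-deficit computation described above: fit the VO2/running-speed regression on submaximal stages (optionally restricted to speeds below the lactate threshold, as in the STR), extrapolate the demand to the supramaximal speed, and subtract the accumulated measured VO2. The function names, units, and sampling interval are illustrative, not taken from the paper.

```python
import numpy as np

def accumulated_o2_deficit(sub_speeds, sub_vo2, supra_speed, supra_vo2_series, dt_min,
                           lactate_threshold_speed=None):
    """AOD = total energy demand from the VO2/speed regression minus accumulated VO2 uptake."""
    speeds = np.asarray(sub_speeds, dtype=float)
    vo2 = np.asarray(sub_vo2, dtype=float)
    if lactate_threshold_speed is not None:                  # sub-threshold regression (STR)
        keep = speeds < lactate_threshold_speed
        speeds, vo2 = speeds[keep], vo2[keep]
    slope, intercept = np.polyfit(speeds, vo2, 1)            # linear VO2/running-speed regression
    demand_per_min = slope * supra_speed + intercept         # extrapolated demand (ml/kg/min)
    duration = len(supra_vo2_series) * dt_min
    total_demand = demand_per_min * duration                 # TED over the supramaximal bout
    accumulated_uptake = np.sum(np.asarray(supra_vo2_series) * dt_min)
    return total_demand - accumulated_uptake                 # AOD (ml/kg)
```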
Zonal wavefront estimation using an array of hexagonal grating patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pathak, Biswajit, E-mail: b.pathak@iitg.ernet.in, E-mail: brboruah@iitg.ernet.in; Boruah, Bosanta R., E-mail: b.pathak@iitg.ernet.in, E-mail: brboruah@iitg.ernet.in
2014-10-15
The accuracy of Shack-Hartmann type wavefront sensors depends on the shape and layout of the lenslet array that samples the incoming wavefront. It has been shown that an array of gratings followed by a focusing lens provides a substitute for the lenslet array. Taking advantage of the computer generated holography technique, a diffraction grating array of arbitrary aperture shape, size or pattern can be designed with little penalty for complexity. In the present work, such a holographic technique is implemented to design a regular hexagonal grating array with zero dead space between grating patterns, eliminating the possibility of wavefront leakage during the estimation of the wavefront. Tessellation of the regular hexagonal shape, unlike other commonly used shapes, also reduces the estimation error by incorporating a larger number of neighboring slope values at equal separation.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
When to initiate integrative neuromuscular training to reduce sports-related injuries in youth?
Myer, Gregory D.; Faigenbaum, Avery D.; Ford, Kevin R.; Best, Thomas M.; Bergeron, Michael F.; Hewett, Timothy E.
2011-01-01
Regular participation in organized youth sports does not ensure adequate exposure to skill- and health-related fitness activities, and sport training without preparatory conditioning does not appear to reduce the risk of injury in young athletes. Recent trends indicate that widespread participation in organized youth sports is occurring at a younger age, especially in girls. Current public health recommendations developed to promote muscle-strengthening and bone-building activities for youth aged 6 and older, along with increased involvement in competitive sport activities at younger ages, have increased interest and concern from parents, clinicians, coaches and teachers regarding the optimal age to encourage and integrate more specialized physical training into youth development programs. This review synthesizes the latest literature and expert opinion regarding when to initiate neuromuscular conditioning in youth and presents a "how-to" integrative training conceptual model that could maximize the potential health-related benefits for children by reducing sports-related injury risk and encouraging lifelong regular physical activity. PMID:21623307
Forecasting long-range atmospheric transport episodes of polychlorinated biphenyls using FLEXPART
NASA Astrophysics Data System (ADS)
Halse, Anne Karine; Eckhardt, Sabine; Schlabach, Martin; Stohl, Andreas; Breivik, Knut
2013-06-01
The analysis of concentrations of persistent organic pollutants (POPs) in ambient air is costly and can only be done for a limited number of samples. It is thus beneficial to maximize the information content of the samples analyzed via a targeted observation strategy. Using polychlorinated biphenyls (PCBs) as an example, a forecasting system to predict and evaluate long-range atmospheric transport (LRAT) episodes of POPs at a remote site in southern Norway has been developed. The system uses the Lagrangian particle transport model FLEXPART, and can be used for triggering extra ("targeted") sampling when LRAT episodes are predicted to occur. The system was evaluated by comparing targeted samples collected over 12-25 h during individual LRAT episodes with monitoring samples regularly collected over one day per week throughout a year. Measured concentrations in all targeted samples were above the 75th percentile of the concentrations obtained from the regular monitoring program and included the highest measured values of all samples. This clearly demonstrates the success of the targeted sampling strategy.
Regular and Chaotic Quantum Dynamics of Two-Level Atoms in a Selfconsistent Radiation Field
NASA Technical Reports Server (NTRS)
Konkov, L. E.; Prants, S. V.
1996-01-01
Dynamics of two-level atoms interacting with their own radiation field in a single-mode high-quality resonator is considered. The dynamical system consists of two second-order differential equations, one for the atomic SU(2) dynamical-group parameter and another for the field strength. With the help of the maximal Lyapunov exponent for this set, we numerically investigate transitions from regularity to deterministic quantum chaos in such a simple model. Increasing the collective coupling constant b = 8πN₀d²/(ħω), we observed for initially unexcited atoms a usual sharp transition to chaos at b_c ≈ 1. If we take the dimensionless individual Rabi frequency a = Ω/2ω as a control parameter, then a sequence of order-to-chaos transitions has been observed starting with the critical value a_c ≈ 0.25 at the same initial conditions.
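The abstract does not reproduce the atom-field equations, so the sketch below only illustrates the generic two-trajectory (Benettin-style) estimate of a maximal Lyapunov exponent that such a study relies on; the Lorenz system is an arbitrary stand-in dynamical system, and all parameter values are illustrative.

```python
# Generic sketch: estimate the maximal Lyapunov exponent from the divergence of two
# nearby trajectories with periodic renormalization of their separation.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def max_lyapunov(x0, d0=1e-8, dt=0.5, n_steps=400):
    x = np.array(x0, dtype=float)
    y = x.copy()
    y[0] += d0                                   # tiny initial separation
    log_sum = 0.0
    for _ in range(n_steps):
        x = solve_ivp(lorenz, (0, dt), x, rtol=1e-9, atol=1e-9).y[:, -1]
        y = solve_ivp(lorenz, (0, dt), y, rtol=1e-9, atol=1e-9).y[:, -1]
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)               # renormalize the separation
    return log_sum / (n_steps * dt)

print(max_lyapunov([1.0, 1.0, 1.0]))             # ~0.9 for the classic Lorenz parameters
```

A positive estimate signals exponential divergence of nearby trajectories (chaos), while values near zero indicate regular motion, which is the criterion the study above applies to its atom-field system.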
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, the one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
Lee, Sung Soo; Kang, Sunghwun
2015-01-01
[Purpose] The aim of the study was to clarify the effects of regular exercise on lipid profiles and serum adipokines in Korean children. [Subjects and Methods] Subjects were divided into controls (n=10), children who were obese (n=10), and children with type 2 diabetes mellitus (n=10). Maximal oxygen uptake (VO2max), body composition, lipid profiles, glucagon, insulin and adipokines (leptin, resistin, visfatin and retinol binding protein 4) were measured before and after a 12-week exercise program. [Results] Body weight, body mass index, and percentage body fat were significantly higher in the obese and diabetes groups compared with the control group. Total cholesterol, triglycerides, low-density lipoprotein cholesterol and glycemic control levels were significantly decreased after the exercise program in the obese and diabetes groups, while high-density lipoprotein cholesterol levels were significantly increased. Adipokines were higher in the obese and diabetes groups compared with the control group prior to the exercise program, and were significantly lower following completion. [Conclusion] These results suggest that regular exercise has positive effects on obesity and type 2 diabetes mellitus in Korean children by improving glycemic control and reducing body weight, thereby lowering cardiovascular risk factors and adipokine levels. PMID:26180345
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Heterogeneous responses of human limbs to infused adrenergic agonists: a gravitational effect?
NASA Technical Reports Server (NTRS)
Pawelczyk, James A.; Levine, Benjamin D.
2002-01-01
Unlike quadrupeds, the legs of humans are regularly exposed to elevated pressures relative to the arms. We hypothesized that this "dependent hypertension" would be associated with altered adrenergic responsiveness. Isoproterenol (0.75-24 ng x 100 ml limb volume-1 x min-1) and phenylephrine (0.025-0.8 microg x 100 ml limb volume-1 x min-1) were infused incrementally in the brachial and femoral arteries of 12 normal volunteers; changes in limb blood flow were quantified by using strain-gauge plethysmography. Compared with the forearm, baseline calf vascular resistance was greater (38.8 +/- 2.5 vs. 26.9 +/- 2.0 mmHg x 100 ml x min x ml-1; P < 0.001) and maximal conductance was lower (46.1 +/- 11.9 vs. 59.4 +/- 13.4 ml x ml-1 x min-1 x mmHg-1; P < 0.03). Vascular conductance did not differ between the two limbs during isoproterenol infusions, whereas decreases in vascular conductance were greater in the calf than the forearm during phenylephrine infusions (P < 0.001). With responses normalized to maximal conductance, the half-maximal response for phenylephrine was significantly less for the calf than the forearm (P < 0.001), whereas the half-maximal response for isoproterenol did not differ between limbs. We conclude that alpha1- but not beta-adrenergic-receptor responsiveness in human limbs is nonuniform. The relatively greater response to alpha1-adrenergic-receptor stimulation in the calf may represent an adaptive mechanism that limits blood pooling and capillary filtration in the legs during standing.
Low External Workloads Are Related to Higher Injury Risk in Professional Male Basketball Games.
Caparrós, Toni; Casals, Martí; Solana, Álvaro; Peña, Javier
2018-06-01
The primary purpose of this study was to identify potential risk factors for sports injuries in professional basketball. An observational retrospective cohort study involving a male professional basketball team, using game tracking data, was conducted during three consecutive seasons. Thirty-three professional basketball players took part in this study. A total of 29 time-loss injuries were recorded during regular season games, accounting for 244 total missed games with a mean of 16.26 ± 15.21 per player and season. The tracking data included the following variables: minutes played, physiological load, physiological intensity, mechanical load, mechanical intensity, distance covered, walking maximal speed, sprinting maximal speed, maximal speed, average offensive speed, average defensive speed, level one acceleration, level two acceleration, level three acceleration, level four acceleration, level one deceleration, level two deceleration, level three deceleration, level four deceleration, player efficiency rating and usage percentage. The influence of demographic characteristics, tracking data and performance factors on the risk of injury was investigated using multivariate analysis with incidence rate ratios (IRRs). Athletes with 3 or fewer decelerations per game (IRR, 4.36; 95% CI, 1.78-10.6) and those covering 1.3 miles or less per game (lower workload) (IRR, 6.42; 95% CI, 2.52-16.3) had a higher risk of injury during games (p < 0.01 in both cases). Therefore, unloaded players have a higher risk of injury. Adequate management of training loads might be a relevant factor in reducing the likelihood of injury according to individual profiles.
Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.
Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf
2018-05-12
We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated locations of pacing, and compared with the performance of Tikhonov zeroth-order regularization. Results in the wavelet domain showed higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions.
NASA Astrophysics Data System (ADS)
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency selective fading because of the increase of the symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of the sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative inter-carrier interference (ICI) estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN). The effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
"Ersatz" and "hybrid" NMR spectral estimates using the filter diagonalization method.
Ridge, Clark D; Shaka, A J
2009-03-12
The filter diagonalization method (FDM) is an efficient and elegant way to make a spectral estimate purely in terms of Lorentzian peaks. As NMR spectral peaks of liquids conform quite well to this model, the FDM spectral estimate can be accurate with far fewer time domain points than conventional discrete Fourier transform (DFT) processing. However, noise is not efficiently characterized by a finite number of Lorentzian peaks, or by any other analytical form, for that matter. As a result, noise can affect the FDM spectrum in different ways than it does the DFT spectrum, and the effect depends on the dimensionality of the spectrum. Regularization to suppress (or control) the influence of noise to give an "ersatz", or EFDM, spectrum is shown to sometimes miss weak features, prompting a more conservative implementation of filter diagonalization. The spectra obtained, called "hybrid" or HFDM spectra, are acquired by using regularized FDM to obtain an "infinite time" spectral estimate and then adding to it the difference between the DFT of the data and the finite time FDM estimate, over the same time interval. HFDM has a number of advantages compared to the EFDM spectra, where all features must be Lorentzian. They also show better resolution than DFT spectra. The HFDM spectrum is a reliable and robust way to try to extract more information from noisy, truncated data records and is less sensitive to the choice of regularization parameter. In multidimensional NMR of liquids, HFDM is a conservative way to handle the problems of noise, truncation, and spectral peaks that depart significantly from the model of a multidimensional Lorentzian peak.
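In notation introduced here only for clarity (it is not used in the abstract), the hybrid construction described above can be summarized as adding the finite-time residual back to the regularized infinite-time FDM estimate:

```latex
% \hat{S}^{\infty}_{\mathrm{FDM}}: regularized "infinite time" FDM estimate
% \hat{S}^{T}_{\mathrm{FDM}}:      FDM estimate over the measured interval of length T
% S^{T}_{\mathrm{DFT}}:            DFT of the same T-point data record
\[
  \hat{S}_{\mathrm{HFDM}}(\omega)
    = \hat{S}^{\infty}_{\mathrm{FDM}}(\omega)
      + \left[ S^{T}_{\mathrm{DFT}}(\omega) - \hat{S}^{T}_{\mathrm{FDM}}(\omega) \right]
\]
```

Written this way, any feature the Lorentzian model misses survives in the DFT residual term, which is consistent with the reduced sensitivity to the regularization parameter noted above.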
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
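For context, here is a minimal sketch of the classic Laplacian smoothness constraint (LSC) that the proposed ASC refines: the slip model minimizes a data misfit plus a scaled second-difference penalty. The Green's function matrix, noise level, regularization weight, and function names are illustrative stand-ins, and the adaptive ASC weighting itself is not reproduced.

```python
# Minimal LSC-style sketch: solve min ||G m - d||^2 + alpha^2 ||L m||^2 by stacking
# the smoothness equations under the data equations and using ordinary least squares.
import numpy as np

def second_difference(n):
    """1-D discrete Laplacian (second-difference) operator as an (n-2) x n matrix."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def regularized_slip(G, d, alpha):
    n = G.shape[1]
    L = second_difference(n)
    A = np.vstack([G, alpha * L])                  # augmented design matrix
    b = np.concatenate([d, np.zeros(L.shape[0])])  # zeros enforce smoothness
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

rng = np.random.default_rng(1)
n_obs, n_patches = 60, 30
G = rng.standard_normal((n_obs, n_patches))        # stand-in Green's functions
true_slip = np.sin(np.linspace(0, np.pi, n_patches)) ** 2
d = G @ true_slip + 0.05 * rng.standard_normal(n_obs)
print(np.round(regularized_slip(G, d, alpha=1.0), 2))
```

The choice of alpha here is exactly the regularization-parameter selection problem for which the abstract recommends Helmert variance component estimation over generalized cross-validation or the mean squared error criterion.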
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are applicable only to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
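A minimal sketch of the modeling stage, under the assumption that "regularized extreme learning" refers to a standard ridge-regularized extreme learning machine mapping hop counts to physical distances; the network simulation, anchor selection, and distributed location-estimation step are omitted, and the class name and all parameter values are illustrative.

```python
# Hedged sketch of a regularized extreme learning machine (random hidden layer,
# ridge-regularized output weights) fitted on hop-count/distance training pairs.
import numpy as np

class RegularizedELM:
    def __init__(self, n_hidden=50, lam=1e-2, seed=0):
        self.n_hidden, self.lam = n_hidden, lam
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)        # random feature map

    def fit(self, X, T):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Ridge (regularized) output weights: beta = (H'H + lam I)^-1 H'T
        self.beta = np.linalg.solve(H.T @ H + self.lam * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy training data: hop counts between node pairs vs. true distances.
rng = np.random.default_rng(1)
hops = rng.integers(1, 12, size=(300, 1)).astype(float)
dist = 25.0 * hops[:, 0] + rng.normal(0, 8, 300)   # roughly 25 m per hop
model = RegularizedELM().fit(hops, dist)
print(model.predict(np.array([[3.0], [7.0]])))
```

The ridge term lam is the regularization referred to above; because only the output weights are solved for, training reduces to one linear solve, which keeps the computational cost low.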
The Elite Athlete and Strenuous Exercise in Pregnancy.
Pivarnik, James M; Szymanski, Linda M; Conway, Michelle R
2016-09-01
Highly trained women continue to exercise during pregnancy, but there is little information available to guide them, and their health care providers, in how to maximize performance without jeopardizing the maternal-fetal unit. Available evidence focusing on average women who perform regular vigorous exercise suggests that this activity is helpful in preventing several maladies of pregnancy, with little to no evidence of harm. However, some studies have shown that there may be a limit to how intense an elite performer should exercise during pregnancy. Health care providers should monitor these women athletes carefully, to build trust and understanding.
Evaluation of γ-Induced Apoptosis in Human Peripheral Blood Lymphocytes
NASA Astrophysics Data System (ADS)
Baranova, Elena; Boreyko, Alla; Ravnachka, Ivanka; Saveleva, Maria
2010-01-01
Several experiments have been performed to study regularities in the induction of apoptotic cells in human lymphocytes by 60Co γ-rays at different times after irradiation. Apoptosis induction by 60Co γ-rays in human lymphocytes in different cell cycle phases (G0, S, G1, and G2) has been studied. The maximal apoptosis output in lymphocyte cells was observed in the S phase. The modifying effect of the replicative and reparative DNA synthesis inhibitors 1-β-D-arabinofuranosylcytosine (Ara-C) and hydroxyurea (Hu) on the kinetics of 60Co γ-ray-induced apoptosis in human lymphocytes has also been studied.
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
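A simplified sketch of the dynamic-programming idea: discretize the slope range, score each increment under Gaussian noise, penalize slope changes with a Markov prior, and recover the MAP slope sequence with a Viterbi-style recursion. The exact MAPSlope model, priors, and parameter-estimation step from the paper are not reproduced; function names and all values below are illustrative.

```python
# Viterbi-style MAP estimation of a piecewise-constant slope sequence from noisy data.
import numpy as np

def map_slopes(y, dx, slopes, sigma, p_stay=0.95):
    d = np.diff(y)                                   # observed increments
    K, n = len(slopes), len(d)
    log_emit = -0.5 * ((d[None, :] - slopes[:, None] * dx) / sigma) ** 2
    log_trans = np.full((K, K), np.log((1 - p_stay) / (K - 1)))
    np.fill_diagonal(log_trans, np.log(p_stay))
    score = log_emit[:, 0].copy()
    back = np.zeros((n, K), dtype=int)
    for t in range(1, n):                            # Viterbi recursion
        cand = score[:, None] + log_trans            # cand[i, j]: prev state i -> current j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_emit[:, t]
    path = [int(np.argmax(score))]
    for t in range(n - 1, 0, -1):                    # backtrack the best path
        path.append(back[t, path[-1]])
    return slopes[np.array(path[::-1])]

x = np.arange(0, 20, 0.1)
true = np.where(x < 10, 2.0, -1.0)                   # one breakpoint at x = 10
y = np.concatenate([[0.0], np.cumsum(true[:-1] * 0.1)])
y += np.random.default_rng(2).normal(0, 0.05, x.size)
print(map_slopes(y, 0.1, slopes=np.linspace(-3, 3, 25), sigma=0.1)[::40])
```

Discretizing the slope values is what makes the posterior maximization tractable as a dynamic program, mirroring the quantization-level analysis discussed in the abstract.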
Estimated maximal and current brain volume predict cognitive ability in old age
Royle, Natalie A.; Booth, Tom; Valdés Hernández, Maria C.; Penke, Lars; Murray, Catherine; Gow, Alan J.; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.
2013-01-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (male, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age but that these effects were the same for both white and gray matter. As 1 of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. PMID:23850342
NASA Astrophysics Data System (ADS)
Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin
2011-06-01
We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebrospinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets (consisting of 20 and 18 volumes, respectively) provided by the Internet Brain Segmentation Repository.
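Stripped to its unsupervised core, the EM piece of such a pipeline is a Gaussian mixture over voxel intensities; the sketch below fits a three-class mixture to synthetic intensities standing in for CSF/GM/WM. The discriminative PBT prior, MRF regularization, atlas alignment, and INU correction described above are all omitted, and the intensity values are arbitrary.

```python
# EM-based tissue classification sketch: three-class Gaussian mixture on intensities.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 1-D intensity samples for three tissue classes (arbitrary means/weights).
intensities = np.concatenate([
    rng.normal(0.25, 0.04, 5000),   # "CSF"
    rng.normal(0.55, 0.05, 8000),   # "GM"
    rng.normal(0.80, 0.04, 9000),   # "WM"
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)               # EM runs inside fit
for k in np.argsort(gmm.means_.ravel()):
    print(f"class mean {gmm.means_[k, 0]:.2f}  weight {gmm.weights_[k]:.2f}")
```

The method above goes well beyond this baseline precisely because intensity alone is ambiguous: the MRF clique potentials and the PBT-based spatial prior are what disambiguate voxels whose intensities overlap between classes.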
UWB channel estimation using new generating TR transceivers
Nekoogar, Faranak [San Ramon, CA; Dowla, Farid U [Castro Valley, CA; Spiridon, Alex [Palo Alto, CA; Haugen, Peter C [Livermore, CA; Benzel, Dave M [Livermore, CA
2011-06-28
The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (TR) transceivers that utilize a single reference pulse or a preamble of reference pulses to provide improved channel estimation while offering higher Bit Error Rate (BER) performance and data rates without diluting the transmitter power.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Structured penalties for functional linear models-partially empirical eigenvectors for regression.
Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding
2012-01-01
One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
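A minimal sketch of the GCV criterion for a Tikhonov-regularized problem, evaluated directly through the SVD; the Lanczos/Gauss-quadrature approximations and the joint search over PSF parameters described above are not reproduced, and the test matrix, noise level, and function name are arbitrary stand-ins.

```python
# GCV for Tikhonov regularization: pick lambda minimizing
#   GCV(lam) = ||(I - A_lam) b||^2 / trace(I - A_lam)^2, with A_lam the influence matrix.
import numpy as np

def gcv_curve(A, b, lams):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
        scores.append(resid / (A.shape[0] - np.sum(f)) ** 2)
    return np.array(scores)

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40)) @ np.diag(0.9 ** np.arange(40))  # ill-conditioned test matrix
x_true = rng.standard_normal(40)
b = A @ x_true + 0.01 * rng.standard_normal(80)
lams = np.logspace(-4, 1, 60)
print("GCV-optimal lambda:", lams[np.argmin(gcv_curve(A, b, lams))])
```

Evaluating the trace term is what becomes expensive for large blurring operators, which is where the Lanczos and Gauss-quadrature approximations mentioned in the abstract come in.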
Ti, Lianping; Socías, María Eugenia; Wood, Evan; Milloy, M-J; Nosova, Ekaterina; DeBeck, Kora; Kerr, Thomas
2018-01-01
Background & aims People who inject drugs (PWID) living with hepatitis C virus (HCV) infection often experience barriers to accessing HCV treatment and care. New, safer and more effective direct-acting antiviral-based therapies offer an opportunity to scale-up HCV-related services. Methadone maintenance therapy (MMT) programs have been shown to be effective in linking PWID to health and support services, largely in the context of HIV. The objective of the study was to examine the relationship between being enrolled in MMT and having access to regular physician care regarding HCV among HCV antibody-positive PWID in Vancouver, Canada. Design Three prospective cohort studies of people who use illicit drugs. Setting Vancouver, Canada. Participants We restricted the study sample to 1627 HCV-positive PWID between September 2005 and May 2015. Measurements A marginal structural model using inverse probability of treatment weights was used to estimate the longitudinal relationship between being enrolled in MMT and having a regular HCV physician and/or specialist. Findings In total, 1357 (83.4%) reported having access to regular physician care regarding HCV at least once during the study period. A marginal structural model estimated a 2.12 (95% confidence interval [CI]: 1.77–2.20) greater odds of having a regular HCV physician among participants enrolled in MMT compared to those not enrolled. Conclusions HCV-positive PWID who enrolled in MMT were more likely to report access to regular physician care regarding HCV compared to those not enrolled in MMT. These findings demonstrate that opioid agonist treatment may be helpful in linking PWID to HCV care, and highlight the need to better engage people who use drugs in substance use care, when appropriate. PMID:29579073
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
Yaesoubi, Reza; Roberts, Stephen D
2010-12-01
A health purchaser's willingness-to-pay (WTP) for health is defined as the amount of money the health purchaser (e.g. a health maximizing public agency or a profit maximizing health insurer) is willing to spend for an additional unit of health. In this paper, we propose a game-theoretic framework for estimating a health purchaser's WTP for health in markets where the health purchaser offers a menu of medical interventions, and each individual in the population selects the intervention that maximizes her prospect. We discuss how the WTP for health can be employed to determine medical guidelines, and to price new medical technologies, such that the health purchaser is willing to implement them. The framework further introduces a measure for WTP for expansion, defined as the amount of money the health purchaser is willing to pay per person in the population served by the health provider to increase the consumption level of the intervention by one percent without changing the intervention price. This measure can be employed to find how much to invest in expanding a medical program through opening new facilities, advertising, etc. Applying the proposed framework to colorectal cancer screening tests, we estimate the WTP for health and the WTP for expansion of colorectal cancer screening tests for the 2005 US population.
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial direction. PMID:28603407
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for the sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance in RLS based algorithms becomes worse, when both input and output are contaminated by noise (the error-in-variables problem). We proposed an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS like iteration using the [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through the effective inversion matrix handling. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS for the sparse system identification setting.
Assessing the resolution-dependent utility of tomograms for geostatistics
Day-Lewis, F. D.; Lane, J.W.
2004-01-01
Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned by initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
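A hedged sketch of the autocorrelation-based step and stride regularity indices (Ad1, Ad2) described above, computed from a vertical acceleration signal with an unbiased autocorrelation estimate; the peak-search windows, sampling rate, synthetic signal, and function name are illustrative, not the study's protocol.

```python
# Step/stride regularity from the unbiased, normalized autocorrelation of acceleration.
import numpy as np

def regularity_indices(acc, fs, step_time, stride_time):
    """Return (Ad1, Ad2): autocorrelation peaks near one step and one stride lag."""
    acc = acc - acc.mean()
    n = acc.size
    ac = np.correlate(acc, acc, mode="full")[n - 1:]      # lags 0 .. n-1
    ac = ac / (n - np.arange(n))                          # unbiased estimate
    ac = ac / ac[0]                                        # normalize to lag 0
    def peak_near(lag_s, tol=0.2):
        lo, hi = int(fs * lag_s * (1 - tol)), int(fs * lag_s * (1 + tol))
        return ac[lo:hi].max()
    return peak_near(step_time), peak_near(stride_time)

fs = 100.0                                                 # Hz
t = np.arange(0, 30, 1 / fs)
step_t, stride_t = 0.55, 1.10                              # seconds
acc = np.sin(2 * np.pi * t / step_t) + 0.4 * np.sin(2 * np.pi * t / stride_t)
acc += 0.1 * np.random.default_rng(3).standard_normal(t.size)
ad1, ad2 = regularity_indices(acc, fs, step_t, stride_t)
print(f"step regularity Ad1 = {ad1:.2f}, stride regularity Ad2 = {ad2:.2f}")
```

Values near 1 indicate highly repeatable consecutive steps or strides, which is why lower Ad1/Ad2 values were observed in the amputee group above.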
The quantitative genetics of maximal and basal rates of oxygen consumption in mice.
Dohm, M R; Hayes, J P; Garland, T
2001-01-01
A positive genetic correlation between basal metabolic rate (BMR) and maximal (VO2max) rate of oxygen consumption is a key assumption of the aerobic capacity model for the evolution of endothermy. We estimated the genetic (V_A, additive, and V_D, dominance), prenatal (V_N), and postnatal common environmental (V_C) contributions to individual differences in metabolic rates and body mass for a genetically heterogeneous laboratory strain of house mice (Mus domesticus). Our breeding design did not allow the simultaneous estimation of V_D and V_N. Regardless of whether V_D or V_N was assumed, estimates of V_A were negative under the full models. Hence, we fitted reduced models (e.g., V_A + V_N + V_E or V_A + V_E) and obtained new variance estimates. For reduced models, narrow-sense heritability (h²_N) for BMR was <0.1, but estimates of h²_N for VO2max were higher. When estimated with the V_A + V_E model, the additive genetic covariance between VO2max and BMR was positive and statistically different from zero. This result offers tentative support for the aerobic capacity model for the evolution of vertebrate energetics. However, constraints imposed on the genetic model may cause our estimates of additive variance and covariance to be biased, so our results should be interpreted with caution and tested via selection experiments. PMID:11560903
Local regularity for time-dependent tug-of-war games with varying probabilities
NASA Astrophysics Data System (ADS)
Parviainen, Mikko; Ruosteenoja, Eero
2016-07-01
We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p(x,t)-parabolic equation $u_t = \Delta u + (p(x,t) - 2)\,\Delta^{N}_{\infty} u$.
ERIC Educational Resources Information Center
Norlander, Torsten; Moas, Leif; Archer, Trevor
2005-01-01
The present study examined whether a short but regularly used program of relaxation, applied to Primary and Secondary school children, could (a) reduce noise levels (in decibels), (b) reduce pupils' experienced stress levels, and (c) increase the pupils' ability to concentrate, as measured by teachers' estimates. Noise levels in 5 classrooms (84…
Self-reported physical activity among blacks: estimates from national surveys.
Whitt-Glover, Melicia C; Taylor, Wendell C; Heath, Gregory W; Macera, Caroline A
2007-11-01
National surveillance data provide population-level estimates of physical activity participation, but generally do not include detailed subgroup analyses, which could provide a better understanding of physical activity among subgroups. This paper presents a descriptive analysis of self-reported regular physical activity among black adults using data from the 2003 Behavioral Risk Factor Surveillance System (n=19,189), the 2004 National Health Interview Survey (n=4263), and the 1999-2004 National Health and Nutrition Examination Survey (n=3407). Analyses were conducted between January and March 2006. Datasets were analyzed separately to estimate the proportion of black adults meeting national physical activity recommendations overall and stratified by gender and other demographic subgroups. The proportion of black adults reporting regular PA ranged from 24% to 36%. Regular physical activity was highest among men; younger age groups; highest education and income groups; those who were employed and married; overweight, but not obese, men; and normal-weight women. This pattern was consistent across surveys. The observed physical activity patterns were consistent with national trends. The data suggest that older black adults and those with low education and income levels are at greatest risk for inactive lifestyles and may require additional attention in efforts to increase physical activity in black adults. The variability across datasets reinforces the need for objective measures in national surveys.
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively optimize several noteworthy issues, including the choices of the weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of the MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
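As a rough sketch of the inner inversion step such a scheme relies on, the code below solves a single nonnegative Tikhonov (Phillips-Twomey-type) problem by augmenting the kernel and calling a nonnegative least-squares solver; the wavelet multiscale recursion, angular weighting, and the actual DLS kernel are not reproduced, and the kernel, grid, and parameter values are stand-ins.

```python
# Nonnegative Tikhonov step: min_{x >= 0} ||A x - b||^2 + lam^2 ||x||^2 via augmented NNLS.
import numpy as np
from scipy.optimize import nnls

def nonneg_tikhonov(A, b, lam):
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

rng = np.random.default_rng(0)
sizes = np.linspace(50, 800, 40)                     # nm, discretized PSD grid
# Smooth, ill-conditioned stand-in kernel mapping the PSD to measured quantities.
decay = np.linspace(0.5, 3.0, 25)
A = np.exp(-np.outer(decay, sizes) / 500.0)
psd_true = np.exp(-0.5 * ((sizes - 300) / 60) ** 2)  # unimodal distribution
b = A @ psd_true + 1e-3 * rng.standard_normal(25)
print(np.round(nonneg_tikhonov(A, b, lam=1e-2), 2))
```

The nonnegativity constraint is what keeps the recovered PSD physically meaningful; the wavelet recursion described above can be read as repeatedly calling such a solver on progressively refined representations of the distribution.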
Vukicevic, Arso M; Zelic, Ksenija; Jovicic, Gordana; Djuric, Marija; Filipovic, Nenad
2015-05-01
The aim of this study was to use Finite Element Analysis (FEA) to estimate the influence of various mastication loads and different tooth treatments (composite restoration and endodontic treatment) on dentine fatigue. The analysis of the fatigue behaviour of human dentine in intact and composite-restored teeth with root canal treatment was performed using FEA and fatigue theory. Dentine fatigue behaviour was analysed in three virtual models: an intact, a composite-restored and an endodontically-treated tooth. Volumetric change during the polymerization of composite was modelled by thermal expansion in a heat transfer analysis. Low and high shrinkage stresses were obtained by varying the linear shrinkage of the composite. Mastication forces were applied occlusally with loads of 100, 150 and 200 N. Assuming one million cycles, the Fatigue Failure Index (FFI) was determined using Goodman's criterion, while residual fatigue lifetime assessment was performed using the Paris power law. The analysis of the Goodman diagram gave both the maximal allowed crack size and the maximal number of cycles for the given stress ratio. The size of cracks was measured on the virtual models. For the given conditions, fatigue failure is not likely to happen in either the intact tooth or the treated teeth with low shrinkage stress. In the cases of high shrinkage stress, the crack length was much larger than the maximal allowed crack and failure occurred with the 150 and 200 N loads. The maximal allowed crack size was slightly lower in the tooth with root canal treatment, which induced a somewhat higher FFI than in the case of the tooth with only composite restoration. The main factors that lead to dentine fatigue are the levels of occlusal load and polymerization stress. However, root canal treatment has a small influence on dentine fatigue. The methodology proposed in this study provides a new insight into the fatigue behaviour of teeth after dental treatments. Furthermore, it estimates the maximal allowed crack size and maximal number of cycles for a specific case. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative, in certain situations, to the standard maximum likelihood method, which has some undesirable characteristics when estimating the parameters of the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. The robustness of the procedures is examined, and example data sets are analyzed.
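A hedged sketch of the general idea follows: rather than maximizing the likelihood, choose the threshold of a three-parameter lognormal by maximizing a probability-plot correlation (here Filliben's statistic) between the shifted log data and normal order-statistic medians, then obtain the remaining parameters from the shifted logs. This is an illustration of the goodness-of-fit-maximization principle, not the exact weighted order statistic estimator studied in the report.

```python
# Probability-plot-correlation estimation for the three-parameter lognormal
# (illustrative; the threshold search bounds and data are assumptions).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def filliben_medians(n):
    u = (np.arange(1, n + 1) - 0.3175) / (n + 0.365)
    u[0] = 1.0 - 0.5 ** (1.0 / n)
    u[-1] = 0.5 ** (1.0 / n)
    return norm.ppf(u)

def filliben_corr(x, gamma):
    y = np.sort(np.log(x - gamma))
    return np.corrcoef(y, filliben_medians(len(x)))[0, 1]

def fit_lognormal3(x):
    x = np.asarray(x, dtype=float)
    upper = x.min() - 1e-9                      # threshold must lie below min(x)
    res = minimize_scalar(lambda g: -filliben_corr(x, g),
                          bounds=(x.min() - 10 * x.std(), upper),
                          method="bounded")
    gamma = res.x
    logs = np.log(x - gamma)
    return gamma, logs.mean(), logs.std(ddof=1)

rng = np.random.default_rng(1)
sample = 5.0 + rng.lognormal(mean=1.0, sigma=0.5, size=200)   # true threshold = 5
print(fit_lognormal3(sample))
```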
Santaniello, Sabato; McCarthy, Michelle M; Montgomery, Erwin B; Gale, John T; Kopell, Nancy; Sarma, Sridevi V
2015-02-10
High-frequency deep brain stimulation (HFS) is clinically recognized to treat parkinsonian movement disorders, but its mechanisms remain elusive. Current hypotheses suggest that the therapeutic merit of HFS stems from increasing the regularity of the firing patterns in the basal ganglia (BG). Although this is consistent with experiments in humans and animal models of Parkinsonism, it is unclear how the pattern regularization would originate from HFS. To address this question, we built a computational model of the cortico-BG-thalamo-cortical loop in normal and parkinsonian conditions. We simulated the effects of subthalamic deep brain stimulation both proximally to the stimulation site and distally through orthodromic and antidromic mechanisms for several stimulation frequencies (20-180 Hz) and, correspondingly, we studied the evolution of the firing patterns in the loop. The model closely reproduced experimental evidence for each structure in the loop and showed that neither the proximal effects nor the distal effects individually account for the observed pattern changes, whereas the combined impact of these effects increases with the stimulation frequency and becomes significant for HFS. Perturbations evoked proximally and distally propagate along the loop, rendezvous in the striatum, and, for HFS, positively overlap (reinforcement), thus causing larger poststimulus activation and more regular patterns in striatum. Reinforcement is maximal for the clinically relevant 130-Hz stimulation and restores a more normal activity in the nuclei downstream. These results suggest that reinforcement may be pivotal to achieve pattern regularization and restore the neural activity in the nuclei downstream and may stem from frequency-selective resonant properties of the loop.
Opinion evolution influenced by informed agents
NASA Astrophysics Data System (ADS)
Fan, Kangqi; Pedrycz, Witold
2016-11-01
Guiding public opinions toward a pre-set target by informed agents can be a strategy adopted in some practical applications. The informed agents are common agents who are employed or chosen to spread the pre-set opinion. In this work, we propose a social judgment based opinion (SJBO) dynamics model to explore the opinion evolution under the influence of informed agents. The SJBO model distinguishes between inner opinions and observable choices, and incorporates both the compromise between similar opinions and the repulsion between dissimilar opinions. Three choices (support, opposition, and remaining undecided) are considered in the SJBO model. Using the SJBO model, both the inner opinions and the observable choices can be tracked during the opinion evolution process. The simulation results indicate that if the exchanges of inner opinions among agents are not available, the effect of informed agents is mainly dependent on the characteristics of regular agents, including the assimilation threshold, decay threshold, and initial opinions. Increasing the assimilation threshold and decay threshold can improve the guiding effectiveness of informed agents. Moreover, if the initial opinions of regular agents are close to null, the full and unanimous consensus at the pre-set opinion can be realized, indicating that, to maximize the influence of informed agents, the guidance should be started when regular agents have little knowledge about a subject under consideration. If the regular agents have had clear opinions, the full and unanimous consensus at the pre-set opinion cannot be achieved. However, the introduction of informed agents can make the majority of agents choose the pre-set opinion.
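The toy simulation below captures the qualitative ingredients of the SJBO setting described above: continuous inner opinions, compromise between similar opinions, repulsion between dissimilar ones, and informed agents who never update and always hold the pre-set opinion. The update rule and all parameter values are illustrative assumptions, not the authors' exact equations or thresholds.

```python
# Toy bounded-confidence dynamics with repulsion and informed agents
# (assumed rule: compromise if |difference| < assim, repel if > repel).
import numpy as np

def step(opinions, informed, assim=0.4, repel=1.2, mu=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    i, j = rng.choice(len(opinions), size=2, replace=False)
    d = opinions[j] - opinions[i]
    if abs(d) < assim:        # similar opinions: move toward each other
        shift = mu * d
    elif abs(d) > repel:      # dissimilar opinions: move apart
        shift = -mu * d
    else:
        shift = 0.0
    if not informed[i]:
        opinions[i] += shift
    if not informed[j]:
        opinions[j] -= shift
    return np.clip(opinions, -1.0, 1.0)

rng = np.random.default_rng(0)
n, target = 200, 1.0
opinions = rng.uniform(-0.1, 0.1, n)           # regular agents start near "null"
informed = np.zeros(n, dtype=bool)
informed[: n // 10] = True                     # 10% informed agents
opinions[informed] = target                    # they spread the pre-set opinion
for _ in range(50000):
    opinions = step(opinions, informed, rng=rng)
print("share supporting the pre-set opinion:", np.mean(opinions > 0.5))
```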
NASA Astrophysics Data System (ADS)
Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.
2018-03-01
This article considers correcting the engineering servicing (maintenance) interval on the basis of actual dependability data from cars in operation. The purpose of the research is to increase the dependability of transport-technological machines by correcting the engineering servicing interval. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. Based on an analysis of previous research, a method of nonparametric estimation of the car failure rate from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure rate on the engineering servicing interval with various mathematical models is considered, and the exponential model is shown to be the most appropriate for this purpose. The results can be used as a stand-alone method for correcting the engineering servicing interval under given operational conditions, as well as for improving the technical-economic and economic-stochastic methods. On this basis, a method for correcting the engineering servicing interval of transport-technological machines during operation was developed; its use will allow the number of failures to be decreased.
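A minimal illustration of the modelling step mentioned above is fitting an exponential dependence of the failure rate on the servicing interval by log-linear least squares. The data points and units below are invented for the example.

```python
# Fit lambda(tau) ~= a * exp(b * tau) on invented service-interval data.
import numpy as np

tau = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # servicing interval (10^3 km)
failure_rate = np.array([0.8, 1.1, 1.7, 2.6, 3.9])   # failures per vehicle-period

b, log_a = np.polyfit(tau, np.log(failure_rate), 1)  # log-linear regression
a = np.exp(log_a)
print(f"lambda(tau) ~= {a:.3f} * exp({b:.3f} * tau)")
print("predicted failure rate at tau = 12:", a * np.exp(b * 12.0))
```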
Maximizing mitigation benefits : project summary.
DOT National Transportation Integrated Search
2016-04-30
The research team: - Reviewed methods, techniques, and processes at select state DOTs for estimating mitigation costs for the following states: Arizona, California, Colorado, Florida, New York, North Carolina, Ohio, Oregon, Pennsylvania,...
Aziz, Muhammad Hammad; Schneider, Frank; Clausen, Sven; Blank, Elena; Herskind, Carsten; Afzal, Muhammad; Wenz, Frederik
2011-12-16
Radiation-induced secondary cancers are a rare but severe late effect after breast conserving therapy. Intraoperative radiotherapy (IORT) is increasingly used during breast conserving surgery. The purpose of this analysis was to estimate secondary cancer risks after IORT compared to other modalities of breast radiotherapy (APBI - accelerated partial breast irradiation, EBRT - external beam radiotherapy). Computed tomography scans of an anthropomorphic phantom were acquired with an INTRABEAM IORT applicator (diameter 4 cm) in the outer quadrant of the breast and transferred via DICOM to the treatment planning system. The ipsilateral breast, contralateral breast, ipsilateral lung, contralateral lung, spine and heart were contoured. An INTRABEAM source (50 kV) was defined with the tip of the drift tube at the center of the spherical applicator. A dose of 20 Gy at 0 mm depth from the applicator surface was prescribed for IORT and 34 Gy (5 days × 2 × 3.4 Gy) at 10 mm depth for APBI. For EBRT a total dose of 50 Gy in 2 Gy fractions was planned using two tangential fields with wedges. The mean and maximal doses, DVHs, and the volumes of organs at risk (OAR) receiving more than 0.1 Gy and 4 Gy were calculated and compared. The lifetime risk for secondary cancers was estimated according to NCRP report 116. IORT delivered the lowest maximal doses to the contralateral breast (< 0.3 Gy), ipsilateral lung (1.8 Gy), contralateral lung (< 0.3 Gy), heart (1 Gy) and spine (< 0.3 Gy). In comparison, maximal doses for APBI were 2-5 times higher. EBRT delivered a maximal dose of 10.4 Gy to the contralateral breast and 53 Gy to the ipsilateral lung. OAR volumes receiving more than 4 Gy were 0% for IORT, < 2% for APBI and up to 10% for EBRT (ipsilateral lung). The estimated risk for secondary cancer in the respective OAR is considerably lower after IORT and/or APBI than after EBRT. The calculations of maximal doses and OAR volumes suggest that the risk of secondary cancer induction is lower after IORT than after APBI and EBRT.
NASA Astrophysics Data System (ADS)
Tesfagiorgis, Kibrewossen B.
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction because of the spatial limitations of radar and gauge products in mountainous regions. The present work develops an approach to seamlessly blend satellite, available radar, climatological and gauge precipitation products to fill gaps in the ground-based radar precipitation field. To mix different precipitation products, the error of each product relative to the others should be removed. For bias correction, the study uses a new ensemble-based method that estimates spatially varying multiplicative biases in SPEs using a radar-gauge precipitation product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area, and spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. In addition to biases, there can also be spatial displacement between the radar and satellite precipitation estimates, so one of them has to be geometrically corrected with reference to the other. A set of corresponding raining points between the SPE and radar products is selected for linear registration using a regularized least squares technique that minimizes the dislocation error in SPEs with respect to the available radar products. A weighted Successive Correction Method (SCM) is used to merge the error-corrected satellite and radar precipitation estimates. In addition to SCM, a combination of SCM and a Bayesian spatial method is used to merge the rain gauge and climatological precipitation sources with the radar and satellite estimates. We demonstrated the method using two satellite-based products (CPC Morphing, CMORPH, and Hydro-Estimator, HE), two radar-gauge products (Stage-II and ST-IV), the climatological product PRISM, and rain gauge data for several rain events from 2006 to 2008 over different geographical locations of the United States. Results show that (a) the ensemble method helped reduce biases in SPEs significantly, and (b) the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas, which benefits the operational meteorology and hydrology community.
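The merging step can be pictured with a simplified single pass of a weighted Successive Correction Method: a first-guess (bias-corrected satellite) field on a grid is nudged toward sparse radar observations with distance-dependent weights. The Cressman-style weight function, influence radius, and data below are assumptions for illustration; the study's full ensemble bias correction, registration, and Bayesian step are not reproduced.

```python
# One successive-correction pass over a toy grid (assumed Cressman weights).
import numpy as np

def scm_pass(first_guess, xy_grid, obs_xy, obs_val, radius):
    """Nudge a gridded first guess toward point observations."""
    # Background value at each observation location (nearest grid node).
    guess_at_obs = np.array(
        [first_guess[np.argmin(np.sum((xy_grid - p) ** 2, axis=1))] for p in obs_xy]
    )
    innovations = obs_val - guess_at_obs                      # obs minus background
    d2 = np.sum((xy_grid[:, None, :] - obs_xy[None, :, :]) ** 2, axis=2)
    w = np.clip((radius**2 - d2) / (radius**2 + d2), 0.0, None)   # distance weights
    wsum = w.sum(axis=1)
    correction = (w * innovations).sum(axis=1) / np.maximum(wsum, 1e-12)
    return first_guess + correction

xy_grid = np.array([[x, y] for x in range(10) for y in range(10)], dtype=float)
first_guess = np.full(len(xy_grid), 5.0)          # mm/h, bias-corrected satellite field
obs_xy = np.array([[2.0, 3.0], [7.0, 6.0]])       # available radar pixel locations
obs_val = np.array([9.0, 2.0])                    # radar rain rates
merged = scm_pass(first_guess, xy_grid, obs_xy, obs_val, radius=3.0)
print(merged.reshape(10, 10).round(1))
```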
75 FR 28550 - Proposed Information Collection; Comment Request; Delivery Verification Procedure
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-21
... the commodities shipped to the U.S. were in fact received. This procedure increases the effectiveness... Review: Regular submission. Affected Public: Business or other for-profit organizations. Estimated Number.... Estimated Total Annual Cost to Public: $0. IV. Request for Comments Comments are invited on: (a) Whether the...
Estimating the Volumes of Solid Figures with Curved Surfaces.
ERIC Educational Resources Information Center
Cohen, Donald
1991-01-01
Several examples of solid figures that calculus students can use to exercise their skills at estimating volume are presented. Although these figures are bounded by surfaces that are portions of regular cylinders, it is interesting to note that their volumes can be expressed as rational numbers. (JJK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
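The lexicographic idea can be shown in miniature: first find the best achievable efficiency over candidate sensor subsets, then, among subsets tied at that efficiency, pick the cheapest. The efficiency function, sensor names, and costs below are stand-ins, and brute-force enumeration replaces the paper's genetic algorithm and UKF-based evaluation.

```python
# Toy lexicographic sensor-network selection (efficiency first, then cost).
from itertools import combinations

SENSORS = {"T1": 10.0, "T2": 12.0, "P1": 8.0, "F1": 20.0, "C1": 35.0}  # costs (assumed)

def achievable_efficiency(subset):
    """Surrogate: efficiency improves with informative sensors and saturates."""
    gain = {"T1": 0.010, "T2": 0.008, "P1": 0.006, "F1": 0.015, "C1": 0.020}
    return 0.90 + min(0.05, sum(gain[s] for s in subset))

best_eff = 0.0
candidates = []
for r in range(1, len(SENSORS) + 1):
    for subset in combinations(SENSORS, r):
        eff = achievable_efficiency(subset)
        candidates.append((subset, eff, sum(SENSORS[s] for s in subset)))
        best_eff = max(best_eff, eff)

tol = 1e-6  # second stage: among efficiency-optimal networks, minimize cost
feasible = [c for c in candidates if c[1] >= best_eff - tol]
subset, eff, cost = min(feasible, key=lambda c: c[2])
print(subset, eff, cost)
```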
The cost of uniqueness in groundwater model calibration
NASA Astrophysics Data System (ADS)
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
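The averaging argument above has a compact numerical counterpart: for a linear forward model d = Gp with Tikhonov regularization, the noise-free estimate is p_hat = R p_true with resolution matrix R = (GᵀG + λI)⁻¹GᵀG, so each estimated value is a weighted average of the true values with weights given by a row of R. The sketch below uses a random stand-in sensitivity matrix purely to show the loss of amplitude and spreading of a localised anomaly.

```python
# Resolution-matrix illustration of smoothing in regularized inversion.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par, lam = 12, 40, 1e-1
G = rng.standard_normal((n_obs, n_par))            # stand-in sensitivity matrix
R = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T @ G)   # resolution matrix

p_true = np.zeros(n_par)
p_true[20] = 1.0                                   # a localised "anomaly"
p_hat = R @ p_true                                 # what calibration recovers

print("peak of estimate:", p_hat.max())            # < 1: amplitude is lost
print("averaging width (entries of row 20 above 5% of its max):",
      int(np.sum(np.abs(R[20]) > 0.05 * np.abs(R[20]).max())))
```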
Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data
2007-04-01
(10) where B_s(t)·B(t) = B_sx(t)B_x(t) + B_sy(t)B_y(t) + B_sz(t)B_z(t). (11) The operation in (10) may be viewed as a vector matched filter used to estimate B_CPAR(t). In summary, features chosen to maximize the classification information in Y are described in Section 3.2. 3.2. Maximum mutual information (MMI) features. We begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion. Then we review a particular algorithm [2
A test of ecological optimality for semiarid vegetation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.
1992-01-01
Three ecological optimality hypotheses which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model are reviewed and tested. The first hypothesis involves short term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second hypothesis involves vegetation type selection again through soil moisture maximization, and the third involves soil genesis through plant induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.
Effects of Training on the Estimation of Muscular Moment in Submaximal Exercise
ERIC Educational Resources Information Center
Leverrier, Celine; Gauthier, Antoine; Nicolas, Arnaud; Molinaro, Corinne
2011-01-01
The purpose of this study was to observe the effects of a submaximal isometric training program on estimation capacity at 25, 50, and 75% of maximal contraction in isometric action and at two angular velocities. The second purpose was to study the variability of isometric action. To achieve these purposes, participants carried out an isokinetic…
The effect of lifelong exercise dose on cardiovascular function during exercise
Carrick-Ranson, Graeme; Hastings, Jeffrey L.; Bhella, Paul S.; Fujimoto, Naoki; Shibata, Shigeki; Palmer, M. Dean; Boyd, Kara; Livingston, Sheryl; Dijk, Erika
2014-01-01
An increased “dose” of endurance exercise training is associated with a greater maximal oxygen uptake (V̇o2max), a larger left ventricular (LV) mass, and improved heart rate and blood pressure control. However, the effect of lifelong exercise dose on metabolic and hemodynamic response during exercise has not been previously examined. We performed a cross-sectional study on 101 (69 men) seniors (60 yr and older) focusing on lifelong exercise frequency as an index of exercise dose. These included 27 who had performed ≤2 exercise sessions/wk (sedentary), 25 who performed 2–3 sessions/wk (casual), 24 who performed 4–5 sessions/wk (committed) and 25 who performed ≥6 sessions/wk plus regular competitions (Masters athletes) over at least the last 25 yr. Oxygen uptake and hemodynamics [cardiac output, stroke volume (SV)] were collected at rest, two levels of steady-state submaximal exercise, and maximal exercise. Doppler ultrasound measures of LV diastolic filling were assessed at rest and during LV loading (saline infusion) to simulate increased LV filling. Body composition, total blood volume, and heart rate recovery after maximal exercise were also examined. V̇o2max increased in a dose-dependent manner (P < 0.05). At maximal exercise, cardiac output and SV were largest in committed exercisers and Masters athletes (P < 0.05), while arteriovenous oxygen difference was greater in all trained groups (P < 0.05). At maximal exercise, effective arterial elastance, an index of ventricular-arterial coupling, was lower in committed exercisers and Masters athletes (P < 0.05). Doppler measures of LV filling were not enhanced at any condition, irrespective of lifelong exercise frequency. These data suggest that performing four or more weekly endurance exercise sessions over a lifetime results in significant gains in V̇o2max, SV, and heart rate regulation during exercise; however, improved SV regulation during exercise is not coupled with favorable effects on LV filling, even when the heart is fully loaded. PMID:24458750
Effects of Strength Training on Postpubertal Adolescent Distance Runners.
Blagrove, Richard C; Howe, Louis P; Cushion, Emily J; Spence, Adam; Howatson, Glyn; Pedlar, Charles R; Hayes, Philip R
2018-06-01
Strength training activities have consistently been shown to improve running economy (RE) and neuromuscular characteristics, such as force-producing ability and maximal speed, in adult distance runners. However, the effects on adolescent (<18 yr) runners remains elusive. This randomized control trial aimed to examine the effect of strength training on several important physiological and neuromuscular qualities associated with distance running performance. Participants (n = 25, 13 female, 17.2 ± 1.2 yr) were paired according to their sex and RE and randomly assigned to a 10-wk strength training group (STG) or a control group who continued their regular training. The STG performed twice weekly sessions of plyometric, sprint, and resistance training in addition to their normal running. Outcome measures included body mass, maximal oxygen uptake (V˙O2max), speed at V˙O2max, RE (quantified as energy cost), speed at fixed blood lactate concentrations, 20-m sprint, and maximal voluntary contraction during an isometric quarter-squat. Eighteen participants (STG: n = 9, 16.1 ± 1.1 yr; control group: n = 9, 17.6 ± 1.2 yr) completed the study. The STG displayed small improvements (3.2%-3.7%; effect size (ES), 0.31-0.51) in RE that were inferred as "possibly beneficial" for an average of three submaximal speeds. Trivial or small changes were observed for body composition variables, V˙O2max and speed at V˙O2max; however, the training period provided likely benefits to speed at fixed blood lactate concentrations in both groups. Strength training elicited a very likely benefit and a possible benefit to sprint time (ES, 0.32) and maximal voluntary contraction (ES, 0.86), respectively. Ten weeks of strength training added to the program of a postpubertal distance runner was highly likely to improve maximal speed and enhances RE by a small extent, without deleterious effects on body composition or other aerobic parameters.
Interval Running Training Improves Cognitive Flexibility and Aerobic Power of Young Healthy Adults.
Venckunas, Tomas; Snieckus, Audrius; Trinkunas, Eugenijus; Baranauskiene, Neringa; Solianik, Rima; Juodsnukis, Antanas; Streckis, Vytautas; Kamandulis, Sigitas
2016-08-01
Venckunas, T, Snieckus, A, Trinkunas, E, Baranauskiene, N, Solianik, R, Juodsnukis, A, Streckis, V, and Kamandulis, S. Interval running training improves cognitive flexibility and aerobic power of young healthy adults. J Strength Cond Res 30(8): 2114-2121, 2016-The benefits of regular physical exercise may well extend beyond the reduction of chronic diseases risk and augmentation of working capacity, to many other aspects of human well-being, including improved cognitive functioning. Although the effects of moderate intensity continuous training on cognitive performance are relatively well studied, the benefits of interval training have not been investigated in this respect so far. The aim of the current study was to assess whether 7 weeks of interval running training is effective at improving both aerobic fitness and cognitive performance. For this purpose, 8 young dinghy sailors (6 boys and 2 girls) completed the interval running program with 200 m and 2,000 m running performance, cycling maximal oxygen uptake, and cognitive function was measured before and after the intervention. The control group consisted of healthy age-matched subjects (8 boys and 2 girls) who continued their active lifestyle and were tested in the same way as the experimental group, but did not complete any regular training. In the experimental group, 200 m and 2,000 m running performance and cycling maximal oxygen uptake increased together with improved results on cognitive flexibility tasks. No changes in the results of short-term and working memory tasks were observed in the experimental group, and no changes in any of the measured indices were evident in the controls. In conclusion, 7 weeks of interval running training improved running performance and cycling aerobic power, and were sufficient to improve the ability to adjust behavior to changing demands in young active individuals.
Aminiaghdam, Soran; Rode, Christian; Müller, Roy; Blickhan, Reinhard
2017-02-01
Pronograde trunk orientation in small birds causes prominent intra-limb asymmetries in the leg function. As yet, it is not clear whether these asymmetries induced by the trunk reflect general constraints on the leg function regardless of the specific leg architecture or size of the species. To address this, we instructed 12 human volunteers to walk at a self-selected velocity with four postures: regular erect, or with 30 deg, 50 deg and maximal trunk flexion. In addition, we simulated the axial leg force (along the line connecting hip and centre of pressure) using two simple models: spring and damper in series, and parallel spring and damper. As trunk flexion increases, lower limb joints become more flexed during stance. Similar to birds, the associated posterior shift of the hip relative to the centre of mass leads to a shorter leg at toe-off than at touchdown, and to a flatter angle of attack and a steeper leg angle at toe-off. Furthermore, walking with maximal trunk flexion induces right-skewed vertical and horizontal ground reaction force profiles comparable to those in birds. Interestingly, the spring and damper in series model provides a superior prediction of the axial leg force across trunk-flexed gaits compared with the parallel spring and damper model; in regular erect gait, the damper does not substantially improve the reproduction of the human axial leg force. In conclusion, mimicking the pronograde locomotion of birds by bending the trunk forward in humans causes a leg function similar to that of birds despite the different morphology of the segmented legs. © 2017. Published by The Company of Biologists Ltd.
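The two axial-leg-force models named above can be written out directly: a parallel spring-damper (F = k·x + c·dx/dt) and a spring and damper in series (Maxwell element, dF/dt = k·(dx/dt − F/c)). The sketch below drives both with a synthetic leg-compression signal; stiffness, damping, and the input are invented, whereas the study fits the models to measured gait data.

```python
# Parallel vs. series spring-damper axial force for a synthetic stance phase.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 0.6, dt)                        # one stance phase, ~0.6 s
x = 0.05 * np.sin(np.pi * t / 0.6)                 # axial leg shortening (m), assumed
dx = np.gradient(x, dt)

k, c = 20000.0, 600.0                              # N/m and N*s/m (assumed values)

# Parallel spring and damper (algebraic relation).
F_parallel = k * x + c * dx

# Spring and damper in series: integrate dF/dt = k*(dx/dt - F/c) with Euler steps.
F_series = np.zeros_like(t)
for i in range(1, len(t)):
    dF = k * (dx[i - 1] - F_series[i - 1] / c)
    F_series[i] = F_series[i - 1] + dF * dt

print("peak force, parallel model:", F_parallel.max())
print("peak force, series model:  ", F_series.max())
```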
Haugen, Thomas; Tønnessen, Espen; Øksenholt, Øyvind; Haugen, Fredrik Lie; Paulsen, Gøran; Enoksen, Eystein; Seiler, Stephen
2015-01-01
The aims of the present study were to compare the effects of 1) training at 90 and 100% sprint velocity and 2) supervised versus unsupervised sprint training on soccer-specific physical performance in junior soccer players. Young, male soccer players (17 ±1 yr, 71 ±10 kg, 180 ±6 cm) were randomly assigned to four different treatment conditions over a 7-week intervention period. A control group (CON, n=9) completed regular soccer training according to their teams’ original training plans. Three training groups performed a weekly repeated-sprint training session in addition to their regular soccer training sessions performed at A) 100% intensity without supervision (100UNSUP, n=13), B) 90% of maximal sprint velocity with supervision (90SUP, n=10) or C) 90% of maximal sprint velocity without supervision (90UNSUP, n=13). Repetitions x distance for the sprint-training sessions were 15x20 m for 100UNSUP and 30x20 m for 90SUP and 90UNSUP. Single-sprint performance (best time from 15x20 m sprints), repeated-sprint performance (mean time over 15x20 m sprints), countermovement jump and Yo-Yo Intermittent Recovery Level 1 (Yo-Yo IR1) were assessed during pre-training and post-training tests. No significant differences in performance outcomes were observed across groups. 90SUP improved Yo-Yo IR1 by a moderate margin compared to controls, while all other effect magnitudes were trivial or small. In conclusion, neither weekly sprint training at 90 or 100% velocity, nor supervised sprint training enhanced soccer-specific physical performance in junior soccer players. PMID:25798601
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts because of the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns on huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed with respect to scalability and performance. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models using a symbolic database representation. EFPMA grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared with existing approaches such as PrefixSpan and SPADE. ETMA stores and manages transaction data horizontally using distinct notions such as segments, sequences and individual symbols, and exploits a partition-and-conquer method to find maximal patterns using symbolic notation. With this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically in terms of character, series and section approaches, respectively. Assessing the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and real datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining and earthquake prediction applications. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in terms of efficiency and scalability.
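The abstract stays at a high level, so as a concrete point of reference the sketch below finds the most elementary kind of cyclic pattern in a symbol sequence: a symbol that recurs with a given period and phase in at least a chosen fraction of the possible positions. This is only the basic building block of periodic/cyclic pattern mining, not the EFPMA or ETMA algorithms themselves; the thresholds are illustrative.

```python
# Elementary single-symbol cyclic pattern detection in a symbol sequence.
from collections import defaultdict

def periodic_single_symbols(seq, max_period=10, min_conf=0.8):
    results = []
    for p in range(2, max_period + 1):
        hits = defaultdict(int)
        for i, s in enumerate(seq):
            hits[(s, i % p)] += 1                      # occurrences per (symbol, phase)
        for (s, phase), count in hits.items():
            slots = (len(seq) - phase + p - 1) // p    # positions with that phase
            conf = count / slots
            if conf >= min_conf:
                results.append((s, p, phase, round(conf, 2)))
    return results

sequence = list("abcabcabxabcabcabc")
print(periodic_single_symbols(sequence, max_period=5))
```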
Effect of muscle mass and intensity of isometric contraction on heart rate.
Gálvez, J M; Alonso, J P; Sangrador, L A; Navarro, G
2000-02-01
The purpose of this study was to determine the effect of muscle mass and the level of force on the contraction-induced rise in heart rate. We conducted an experimental study in a sample of 28 healthy men between 20 and 30 yr of age (power: 95%, alpha: 5%). Smokers, obese subjects, and those who performed regular physical activity over a certain amount of energetic expenditure were excluded from the study. The participants exerted two types of isometric contractions: handgrip and turning a 40-cm-diameter wheel. Both were sustained to exhaustion at 20 and 50% of maximal force. Twenty-five subjects finished the experiment. Heart rate increased a mean of 15.1 beats/min [95% confidence interval (CI): 5.5-24.6] from 20 to 50% handgrip contractions, and 20.7 beats/min (95% CI: 11.9-29.5) from 20 to 50% wheel-turn contractions. Heart rate also increased a mean of 13.3 beats/min (95% CI: 10.4-16.1) from handgrip to wheel-turn contractions at 20% maximal force, and 18.9 beats/min (95% CI: 9. 8-28.0) from handgrip to wheel-turn contractions at 50% maximal force. We conclude that the magnitude of the heart rate increase during isometric exercise is related to the intensity of the contraction and the mass of the contracted muscle.
Váczi, Márk; Tollár, József; Meszler, Balázs; Juhász, Ivett; Karsai, István
2013-01-01
The aim of the present study was to investigate the effects of a short-term in-season plyometric training program on power, agility and knee extensor strength. Male soccer players from a third league team were assigned into an experimental and a control group. The experimental group, beside its regular soccer training sessions, performed a periodized plyometric training program for six weeks. The program included two training sessions per week, and maximal intensity unilateral and bilateral plyometric exercises (total of 40 – 100 foot contacts/session) were executed. Controls participated only in the same soccer training routine, and did not perform plyometrics. Depth vertical jump height, agility (Illinois Agility Test, T Agility Test) and maximal voluntary isometric torque in knee extensors using Multicont II dynamometer were evaluated before and after the experiment. In the experimental group small but significant improvements were found in both agility tests, while depth jump height and isometric torque increments were greater. The control group did not improve in any of the measures. Results of the study indicate that plyometric training consisting of high impact unilateral and bilateral exercises induced remarkable improvements in lower extremity power and maximal knee extensor strength, and smaller improvements in soccer-specific agility. Therefore, it is concluded that short-term plyometric training should be incorporated in the in-season preparation of lower level players to improve specific performance in soccer. PMID:23717351
Predictors of cardiovascular fitness in sedentary men.
Riou, Marie-Eve; Pigeon, Etienne; St-Onge, Josée; Tremblay, Angelo; Marette, André; Weisnagel, S John; Joanisse, Denis R
2009-04-01
The relative contribution of anthropometric and skeletal muscle characteristics to cardiorespiratory fitness was studied in sedentary men. Cardiorespiratory fitness (maximal oxygen consumption) was assessed using an incremental bicycle ergometer protocol in 37 men aged 34-53 years. Vastus lateralis muscle biopsy samples were used to assess fiber type composition (I, IIA, IIX) and areas, capillary density, and activities of glycolytic and oxidative energy metabolic pathway enzymes. Correlations (all p < 0.05) were observed between maximal oxygen consumption (L.min-1) and body mass (r = 0.53), body mass index (r = 0.39), waist circumference (r = 0.34), fat free mass (FFM; r = 0.68), fat mass (r = 0.33), the enzyme activity of cytochrome c oxidase (COX; r = 0.39), muscle type IIA (r = 0.40) and IIX (r = 0.50) fiber area, and the number of capillaries per type IIA (r = 0.39) and IIX (r = 0.37) fiber. When adjusted for FFM in partial correlations, all correlations were lost, with the exception of COX (r = 0.48). Stepwise multiple regression revealed that maximal oxygen consumption was independently predicted by FFM, COX activity, mean capillary number per fiber, waist circumference, and, to a lesser extent, muscle capillary supply. In the absence of regular physical activity, cardiorespiratory fitness is strongly predicted by the potential for aerobic metabolism of skeletal muscle and negatively correlated with abdominal fat deposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M.; Yu, Yi -Hsiang; Wright, Alan D.
The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.
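The competing-terms objective can be pictured with a toy scan: for a single degree-of-freedom oscillator in regular waves, evaluate J = (time-averaged power) − w·(transmitted-force proxy) over the power-take-off damping and vary the penalty weight w. All coefficients are invented, the motion is assumed sinusoidal, and the transmitted-force expression is a textbook proxy rather than the paper's surge foundation force; the paper solves the full pseudo-spectral optimal control problem instead of this one-parameter scan.

```python
# Power vs. load trade-off for a toy one-DOF WEC under a penalty-weighted objective.
import numpy as np

M_A, B_rad, K = 2.0e5, 4.0e4, 3.0e5     # mass+added mass, radiation damping, stiffness (assumed)
omega, F0 = 0.8, 1.0e5                  # wave frequency (rad/s) and excitation amplitude (N)

def response(B_pto):
    denom = np.sqrt((K - M_A * omega**2) ** 2 + ((B_rad + B_pto) * omega) ** 2)
    X = F0 / denom                               # motion amplitude
    power = 0.5 * B_pto * (omega * X) ** 2       # time-averaged absorbed power
    load = X * np.sqrt(K**2 + ((B_rad + B_pto) * omega) ** 2)   # transmitted-force proxy
    return power, load

B_grid = np.linspace(1e3, 5e5, 500)
for w in (0.0, 0.05, 0.2):                       # penalty weight on the load term
    J = [response(b)[0] - w * response(b)[1] for b in B_grid]
    b_best = B_grid[int(np.argmax(J))]
    p, f = response(b_best)
    print(f"w={w:>4}: B_pto={b_best:9.0f}  power={p:9.0f} W  load={f:9.0f} N")
```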
Performance determinants of fixed gear cycling during criteriums.
Babault, Nicolas; Poisson, Maxime; Cimadoro, Guiseppe; Cometti, Carole; Païzis, Christos
2018-06-17
Nowadays, fixed gear competitions on outdoor circuits such as criteriums are regularly organized worldwide. To date, no study has investigated this alternative form of cycling. The purpose of the present study was to examine fixed gear performance indexes and to characterize physiological determinants of fixed gear cyclists. This study was carried out in two parts. Part 1 (n = 36) examined correlations between performance indexes obtained during a real fixed gear criterium (time trial, fastest laps, averaged lap time during races, fatigue indexes) and during a sprint track time trial. Part 2 (n = 9) examined correlations between the recorded performance indexes and some aerobic and anaerobic performance outputs (VO 2max , maximal aerobic power, knee extensor and knee flexor maximal voluntary torque, vertical jump height and performance during a modified Wingate test). Results from Part 1 indicated significant correlations between fixed gear final performance (i.e. average lap time during the finals) and single lap time (time trial, fastest lap during races and sprint track time trial). In addition, results from Part 2 revealed significant correlations between fixed gear performance and aerobic indicators (VO 2max and maximal aerobic power). However, no significant relationship was obtained between fixed gear cycling and anaerobic qualities such as strength. Similarly to traditional cycling disciplines, we concluded that fixed gear cycling is mainly limited by aerobic capacity, particularly criteriums final performance. However, specific skills including technical competency should be considered.
Neuromuscular response differences to power vs strength back squat exercise in elite athletes.
Brandon, R; Howatson, G; Strachan, F; Hunter, A M
2015-10-01
The study's aim was to establish the neuromuscular responses in elite athletes during and following maximal 'explosive' regular back squat exercise at heavy, moderate, and light loads. Ten elite track and field athletes completed 10 sets of five maximal squat repetitions on three separate days. Knee extension maximal isometric voluntary contraction (MIVC), rate of force development (RFD) and evoked peak twitch force (Pt) assessments were made pre- and post-session. Surface electromyography [root mean square (RMS)] and mechanical measurements were recorded during repetitions. The heavy session resulted in the greatest repetition impulse in comparison to moderate and light sessions (P < 0.001), while the latter showed highest repetition power (P < 0.001). MIVC, RFD, and Pt were significantly reduced post-session (P < 0.01), with greatest reduction observed after the heavy, followed by the moderate and light sessions accordingly. Power significantly reduced during the heavy session only (P < 0.001), and greater increases in RMS occurred during heavy session (P < 0.001), followed by moderate, with no change during light session. In conclusion, this study has shown in elite athletes that the moderate load is optimal for providing a neuromuscular stimulus but with limited fatigue. This type of intervention could be potentially used in the development of both strength and power in elite athletic populations. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Gamma loop contributing to maximal voluntary contractions in man.
Hagbarth, K E; Kunesch, E J; Nordin, M; Schmidt, R; Wallin, E U
1986-01-01
A local anaesthetic drug was injected around the peroneal nerve in healthy subjects in order to investigate whether the resulting loss in foot dorsiflexion power in part depended on a gamma-fibre block preventing 'internal' activation of spindle end-organs and thereby depriving the alpha-motoneurones of an excitatory spindle inflow during contraction. The motor outcome of maximal dorsiflexion efforts was assessed by measuring firing rates of individual motor units in the anterior tibial (t.a.) muscle, mean voltage e.m.g. from the pretibial muscles, dorsiflexion force and range of voluntary foot dorsiflexion movements. The tests were performed with and without peripheral conditioning stimuli, such as agonist or antagonist muscle vibration or imposed stretch of the contracting muscles. As compared to control values of t.a. motor unit firing rates in maximal isometric voluntary contractions, the firing rates were lower and more irregular during maximal dorsiflexion efforts performed during subtotal peroneal nerve blocks. During the development of paresis a gradual reduction of motor unit firing rates was observed before the units ceased responding to the voluntary commands. This change in motor unit behaviour was accompanied by a reduction of the mean voltage e.m.g. activity in the pretibial muscles. At a given stage of anaesthesia the e.m.g. responses to maximal voluntary efforts were more affected than the responses evoked by electric nerve stimuli delivered proximal to the block, indicating that impaired impulse transmission in alpha motor fibres was not the sole cause of the paresis. The inability to generate high and regular motor unit firing rates during peroneal nerve blocks was accentuated by vibration applied over the antagonistic calf muscles. By contrast, in eight out of ten experiments agonist stretch or vibration caused an enhancement of motor unit firing during the maximal force tasks. The reverse effects of agonist and antagonist vibration on the ability to activate the paretic muscles were evidenced also by alterations induced in mean voltage e.m.g. activity, dorsiflexion force and range of dorsiflexion movements. The autogenetic excitatory and the reciprocal inhibitory effects of muscle vibration rose in strength as the vibration frequency was raised from 90 to 165 Hz. Reflex effects on maximal voluntary contraction strength similar to those observed during partial nerve blocks were not seen under normal conditions when the nerve supply was intact.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:3612576
Lower Cardiac Vagal Tone in Non-Obese Healthy Men with Unfavorable Anthropometric Characteristics
Ramos, Plínio S.; Araújo, Claudio Gil S.
2010-01-01
OBJECTIVES: to determine if there are differences in cardiac vagal tone values in non-obese healthy, adult men with and without unfavorable anthropometric characteristics. INTRODUCTION: It is well established that obesity reduces cardiac vagal tone. However, it remains unknown if decreases in cardiac vagal tone can be observed early in non-obese healthy, adult men presenting unfavorable anthropometric characteristics. METHODS: Among 1688 individuals assessed between 2004 and 2008, we selected 118 non-obese (BMI <30 kg/m2), healthy men (no known disease conditions or regular use of relevant medications), aged between 20 and 77 years old (42 ± 12-years-old). Their evaluation included clinical examination, anthropometric assessment (body height and weight, sum of six skinfolds, waist circumference and somatotype), a 4-second exercise test to estimate cardiac vagal tone and a maximal cardiopulmonary exercise test to exclude individuals with myocardial ischemia. The same physician performed all procedures. RESULTS: A lower cardiac vagal tone was found for the individuals in the higher quintiles – unfavorable anthropometric characteristics - of BMI (p=0.005), sum of six skinfolds (p=0.037) and waist circumference (p<0.001). In addition, the more endomorphic individuals also presented a lower cardiac vagal tone (p=0.023), while an ectomorphic build was related to higher cardiac vagal tone values as estimated by the 4-second exercise test (r=0.23; p=0.017). CONCLUSIONS: Non-obese and healthy adult men with unfavorable anthropometric characteristics tend to present lower cardiac vagal tone levels. Early identification of this trend by simple protocols that are non-invasive and risk-free, using select anthropometric characteristics, may be clinically useful in a global strategy to prevent cardiovascular disease. PMID:20126345
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
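The central computational trick described above is replacing an n-by-n nuisance kernel matrix with a low-rank approximation so that downstream solves scale with the rank rather than with n. The sketch below builds a Gaussian kernel, truncates its eigendecomposition, and uses the Woodbury identity to invert (σ²I + K_lowrank) cheaply; it illustrates the generic low-rank idea only, not the fastKM estimating equations, and the kernel, bandwidth, and data are assumptions.

```python
# Low-rank kernel approximation plus a Woodbury solve.
import numpy as np

def gaussian_kernel(X, bandwidth=1.0):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * bandwidth**2))

def low_rank_factor(K, rank):
    vals, vecs = np.linalg.eigh(K)                               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:rank]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None)) # K ~= U U^T

def woodbury_solve(U, sigma2, y):
    """Solve (sigma2*I + U U^T) x = y in O(n r^2) using the Woodbury identity."""
    r = U.shape[1]
    inner = np.eye(r) + (U.T @ U) / sigma2
    return y / sigma2 - U @ np.linalg.solve(inner, U.T @ y) / sigma2**2

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))                  # 500 subjects, 5 variants (simulated)
y = rng.standard_normal(500)
K = gaussian_kernel(X)
U = low_rank_factor(K, rank=30)
x_fast = woodbury_solve(U, sigma2=1.0, y=y)
x_exact = np.linalg.solve(np.eye(500) + U @ U.T, y)
print("max abs difference:", np.max(np.abs(x_fast - x_exact)))
```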
Bhattacharya, S.; Doveton, J.H.; Carr, T.R.; Guy, W.R.; Gerlach, P.M.
2005-01-01
Small independent operators produce most of the Mississippian carbonate fields in the United States mid-continent, where a lack of integrated characterization studies precludes maximization of hydrocarbon recovery. This study uses integrative techniques to leverage extant data in an Osagian and Meramecian (Mississippian) cherty carbonate reservoir in Kansas. Available data include petrophysical logs of varying vintages, a limited number of cores, and production histories from each well. A consistent set of assumptions was used to extract well-level porosity and initial saturations from logs of different types and vintages to build a geomodel. Lacking regularly recorded well shut-in pressures, an iterative technique based on material balance formulations was used to estimate the average reservoir-pressure decline that matched available drillstem test data and validated the log-analysis assumptions. Core plugs representing the principal reservoir petrofacies provide critical inputs for characterization and simulation studies. However, assigning plugs among multiple reservoir petrofacies is difficult in complex (carbonate) reservoirs. In a bottom-up approach, raw capillary pressure (Pc) data were plotted on the Super-Pickett plot, and log- and core-derived saturation-height distributions were reconciled to group plugs by facies, to identify core plugs representative of the principal reservoir facies, and to discriminate facies in the logged interval. Pc data from representative core plugs were used for effective pay evaluation to estimate water cut from completions in infill and producing wells and to guide selective perforations for economic exploitation of mature fields. The results from this study were used to drill 22 infill wells. The techniques demonstrated here can be applied in other fields and reservoirs. Copyright © 2005. The American Association of Petroleum Geologists. All rights reserved.
Supramaximal Eccentrics Versus Traditional Loading in Improving Lower-Body 1RM: A Meta-Analysis.
Buskard, Andrew N L; Gregg, Heath R; Ahn, Soyeon
2018-06-11
Guidelines for improving maximal concentric strength through resistance training (RT) have traditionally included large muscle-group exercises, full ranges of motion, and a load approximating 85% of the 1-repetition maximum (1RM). Supramaximal eccentric training (SME; controlled lowering of loads above the concentric 1RM) has also been shown to be effective at increasing concentric 1RM in the lower body, but concerns regarding injury risk, postexercise soreness, and null benefit over traditional methods (TRAD) may limit the practical utility of this approach. The purpose of this study was to determine whether SME elicits greater lower-body strength improvements than TRAD. Key inclusion criteria were regular exercise modalities typical of nonspecialized exercise facilities (e.g., leg press; key exclusion: isokinetic dynamometer) and at least 6 weeks of RT exposure, leading to 5 studies included in the current meta-analysis. Unbiased effect-size measures that quantify the mean difference in lower-body 1RM between SME and TRAD were extracted. Supramaximal eccentric training did not appear to be more effective than TRAD at increasing lower-body 1RM (pooled effect size = .33, SE = .26, z = 1.26, 95% CI [-0.20, 0.79], p = .20, I² = 56.78%) under a random-effects model in which the between-study variance was estimated using maximum likelihood estimation (τ² = .25). The selection of SME over TRAD in RT programs designed to increase lower-body 1RM does not appear warranted in all populations. Further research should clarify the merit of periodic SME in TRAD-dominant RT programs as well as whether a differential effect exists in trained individuals.
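The random-effects pooling used here can be reproduced in a few lines: estimate the between-study variance τ² by maximum likelihood, then compute the pooled effect, its standard error, z value, confidence interval, and I². The five per-study effect sizes and variances below are invented for illustration; they are not the studies in this meta-analysis.

```python
# Random-effects meta-analysis with ML estimation of tau^2 (invented data).
import numpy as np
from scipy.optimize import minimize_scalar

yi = np.array([0.10, 0.55, 0.30, 0.70, -0.05])     # study effect sizes
vi = np.array([0.08, 0.12, 0.05, 0.20, 0.10])      # within-study variances

def neg_loglik(tau2):
    """Profile negative log-likelihood in tau^2 (constants dropped)."""
    w = 1.0 / (vi + tau2)
    mu = np.sum(w * yi) / np.sum(w)
    return 0.5 * np.sum(np.log(vi + tau2) + w * (yi - mu) ** 2)

tau2 = minimize_scalar(neg_loglik, bounds=(0.0, 5.0), method="bounded").x
w = 1.0 / (vi + tau2)
mu = np.sum(w * yi) / np.sum(w)                    # pooled effect size
se = np.sqrt(1.0 / np.sum(w))
z = mu / se
ci = (mu - 1.96 * se, mu + 1.96 * se)

Q = np.sum((yi - np.sum(yi / vi) / np.sum(1 / vi)) ** 2 / vi)
I2 = max(0.0, (Q - (len(yi) - 1)) / Q) * 100       # heterogeneity, percent
print(f"tau2={tau2:.3f} mu={mu:.3f} SE={se:.3f} z={z:.2f} CI={ci} I2={I2:.1f}%")
```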
Regularized quantile regression for SNP marker estimation of pig growth curves.
Barroso, L M A; Nascimento, M; Nascimento, A C C; Silva, F F; Serão, N V L; Cruz, C D; Resende, M D V; Silva, F L; Azevedo, C F; Lopes, P S; Guimarães, S E F
2017-01-01
Genomic growth curves are generally defined only in terms of population mean; an alternative approach that has not yet been exploited in genomic analyses of growth curves is the Quantile Regression (QR). This methodology allows for the estimation of marker effects at different levels of the variable of interest. We aimed to propose and evaluate a regularized quantile regression for SNP marker effect estimation of pig growth curves, as well as to identify the chromosome regions of the most relevant markers and to estimate the genetic individual weight trajectory over time (genomic growth curve) under different quantiles (levels). The regularized quantile regression (RQR) enabled the discovery, at different levels of interest (quantiles), of the most relevant markers allowing for the identification of QTL regions. We found the same relevant markers simultaneously affecting different growth curve parameters (mature weight and maturity rate): two (ALGA0096701 and ALGA0029483) for RQR(0.2), one (ALGA0096701) for RQR(0.5), and one (ALGA0003761) for RQR(0.8). Three average genomic growth curves were obtained and the behavior was explained by the curve in quantile 0.2, which differed from the others. RQR allowed for the construction of genomic growth curves, which is the key to identifying and selecting the most desirable animals for breeding purposes. Furthermore, the proposed model enabled us to find, at different levels of interest (quantiles), the most relevant markers for each trait (growth curve parameter estimates) and their respective chromosomal positions (identification of new QTL regions for growth curves in pigs). These markers can be exploited under the context of marker assisted selection while aiming to change the shape of pig growth curves.
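A hedged sketch of the estimation idea follows: L1-regularized quantile regression written as a linear program, fitted at several quantile levels to simulated SNP genotypes. This illustrates how marker effects can be estimated at different levels of the response distribution; it is not the exact RQR implementation, penalty, or data used in the paper.

```python
# L1-penalized quantile regression for marker effects via linear programming.
import numpy as np
from scipy.optimize import linprog

def rqr(X, y, tau=0.5, lam=1.0):
    """Minimize check-loss + lam*||beta||_1 with an unpenalized intercept."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])               # intercept column first
    pen = np.r_[0.0, np.full(p, lam)]                  # no penalty on intercept
    # Variables: beta_plus (p+1), beta_minus (p+1), u (n), v (n), all >= 0,
    # with residual y - Xd*beta = u - v.
    c = np.concatenate([pen, pen, tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([Xd, -Xd, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    coef = res.x[: p + 1] - res.x[p + 1 : 2 * (p + 1)]
    return coef[0], coef[1:]                            # intercept, marker effects

rng = np.random.default_rng(0)
n, p = 120, 30
X = rng.integers(0, 3, size=(n, p)).astype(float)       # SNP genotypes coded 0/1/2
beta_true = np.zeros(p)
beta_true[[3, 17]] = [0.8, -0.5]                         # two causal markers
y = 2.0 + X @ beta_true + rng.standard_normal(n)
for tau in (0.2, 0.5, 0.8):                              # different quantile levels
    b0, b = rqr(X, y, tau=tau, lam=2.0)
    print(tau, np.flatnonzero(np.abs(b) > 0.1))
```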
Choosing the Allometric Exponent in Covariate Model Building.
Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B
2018-04-27
Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select its value. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) was assessed using stochastic simulation and estimation. A priori exponent values of 0.67, 0.75, and 1 were used for the simulations. For normal to high BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67, in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to appropriately estimate the exponent. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Study designs with a limited sample size and a narrow range of WT (e.g. < 100 adult participants) therefore risk either selection of a false value or a biased estimate of the allometric exponent; however, such bias is only relevant when extrapolating CL outside the studied population, e.g. an analysis of a study of adults that is used to extrapolate to children.
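A deliberately simplified version of that simulation-estimation loop is sketched below: clearances are simulated under CL_i = CL_std · (WT_i/70)^k with log-normal BSV, and the exponent is re-estimated by least squares on the log scale. The weight distribution, CL_std, and BSV values are illustrative placeholders rather than the NHANES III / nonlinear mixed-effects setup used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_and_fit(n, exponent=0.75, bsv_sd=0.4, cl_std=10.0):
    """Simulate CL_i = CL_std * (WT_i/70)**exponent with log-normal BSV,
    then re-estimate the exponent by least squares on the log scale."""
    wt = rng.uniform(40, 120, size=n)          # placeholder weight distribution
    eta = rng.normal(0.0, bsv_sd, size=n)      # between-subject variability
    cl = cl_std * (wt / 70.0) ** exponent * np.exp(eta)
    # log CL = log CL_std + exponent * log(WT/70) + eta
    A = np.column_stack([np.ones(n), np.log(wt / 70.0)])
    coef, *_ = np.linalg.lstsq(A, np.log(cl), rcond=None)
    return coef[1]

for n in (20, 100, 500):
    estimates = [simulate_and_fit(n) for _ in range(200)]
    print(n, round(float(np.mean(estimates)), 3), round(float(np.std(estimates)), 3))
```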
Wang, Frank; Pan, Kuang-Tse; Chu, Sung-Yu; Chan, Kun-Ming; Chou, Hong-Shiue; Wu, Ting-Jung; Lee, Wei-Chen
2011-04-01
An accurate preoperative estimate of the graft weight is vital to avoid small-for-size syndrome in the recipient and ensure donor safety after adult living donor liver transplantation (LDLT). Here we describe a simple method for estimating the graft volume (GV) that uses the maximal right portal vein diameter (RPVD) and the maximal left portal vein diameter (LPVD). Between June 2004 and December 2009, 175 consecutive donors undergoing right hepatectomy for LDLT were retrospectively reviewed. The GV was determined with 3 estimation methods: (1) the radiological graft volume (RGV) estimated by computed tomography (CT) volumetry; (2) the computed tomography-calculated graft volume (CGV-CT), which was obtained by multiplying the standard liver volume (SLV) by the RGV percentage with respect to the total liver volume derived from CT; and (3) the portal vein diameter ratio-calculated graft volume (CGV-PVDR), which was obtained by multiplying the SLV by the portal vein diameter ratio [PVDR; ie, PVDR = RPVD²/(RPVD² + LPVD²)]. These values were compared to the actual graft weight (AGW), which was measured intraoperatively. The mean AGW was 633.63 ± 107.51 g, whereas the mean RGV, CGV-CT, and CGV-PVDR values were 747.83 ± 138.59, 698.21 ± 94.81, and 685.20 ± 90.88 cm³, respectively. All 3 estimation methods tended to overestimate the AGW (P < 0.001). The actual graft-to-recipient body weight ratio (GRWR) was 1.00% ± 0.19%, and the GRWRs calculated on the basis of the RGV, CGV-CT, and CGV-PVDR values were 1.19% ± 0.25%, 1.11% ± 0.22%, and 1.09% ± 0.21%, respectively. Overall, the CGV-PVDR values better correlated with the AGW and GRWR values according to Lin's concordance correlation coefficient and the Landis and Koch benchmark. In conclusion, the PVDR method is a simple estimation method that accurately predicts GVs and GRWRs in adult LDLT. Copyright © 2011 American Association for the Study of Liver Diseases.
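The PVDR estimate reduces to one line of arithmetic. The sketch below works through a hypothetical donor/recipient pair; the SLV value, portal vein diameters, and recipient weight are invented for illustration, and graft weight is approximated as 1 g per cm³ of estimated volume.

```python
def graft_volume_pvdr(slv_ml, rpvd_mm, lpvd_mm):
    """CGV-PVDR = SLV * PVDR, with PVDR = RPVD^2 / (RPVD^2 + LPVD^2)."""
    pvdr = rpvd_mm**2 / (rpvd_mm**2 + lpvd_mm**2)
    return slv_ml * pvdr

# Hypothetical donor/recipient pair (illustrative numbers only).
gv = graft_volume_pvdr(slv_ml=1200.0, rpvd_mm=11.0, lpvd_mm=9.0)
grwr = gv / 1000.0 / 65.0 * 100.0   # %, assuming ~1 g per cm^3 and a 65 kg recipient
print(round(gv, 1), "cm^3, estimated GRWR", round(grwr, 2), "%")
```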
Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces
NASA Astrophysics Data System (ADS)
Coti Zelati, Michele
2018-02-01
We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazuo
2014-03-15
We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
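For readers unfamiliar with the iteration itself, a generic sketch of the iteratively regularized Gauss-Newton step is given below, applied to a toy two-parameter problem. The geometric decay of the regularization parameter and the fixed iteration count are placeholder choices; the heuristic stopping rule proposed in the paper is not reproduced here.

```python
import numpy as np

def irgn(F, J, y, x0, alpha0=1.0, q=0.7, n_steps=15):
    """Iteratively regularized Gauss-Newton for F(x) = y:
    x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x0 - x_k)),
    with regularization a_k = alpha0 * q**k decaying geometrically."""
    x = x0.copy()
    for k in range(n_steps):
        a = alpha0 * q**k
        Jk = J(x)
        lhs = Jk.T @ Jk + a * np.eye(len(x))
        rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
        x = x + np.linalg.solve(lhs, rhs)
    return x

# Toy two-parameter nonlinear forward model and noisy data.
def F(x):
    return np.array([x[0]**2 + x[1], np.exp(x[1]) - x[0]])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [-1.0, np.exp(x[1])]])

rng = np.random.default_rng(2)
x_true = np.array([1.2, -0.3])
y = F(x_true) + rng.normal(scale=1e-3, size=2)
print(irgn(F, J, y, x0=np.zeros(2)))   # should land near x_true
```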
Travel time tomography with local image regularization by sparsity constrained dictionary learning
NASA Astrophysics Data System (ADS)
Bianco, M.; Gerstoft, P.
2017-12-01
We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
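A compressed sketch of that alternation is given below, using a least-squares global step and orthogonal matching pursuit for the patch coding. The ray matrix, the fixed random dictionary, the non-overlapping patches, and the direct use of the reassembled patch image as the new reference are all simplifications; the paper additionally learns the dictionary from the data, which is omitted here.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def global_step(G, t, s_ref, lam):
    """Step 1: slowness image fitting the travel times, with an energy penalty
    on deviation from the reference image s_ref."""
    A = np.vstack([G, np.sqrt(lam) * np.eye(G.shape[1])])
    b = np.concatenate([t, np.sqrt(lam) * s_ref])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s

def patch_step(s_global, D, n_side, patch, n_nonzero):
    """Steps 2-3: code each non-overlapping patch as a sparse combination of
    dictionary atoms, then reassemble the result as the new reference."""
    img = s_global.reshape(n_side, n_side)
    ref = np.zeros_like(img)
    for i in range(0, n_side, patch):
        for j in range(0, n_side, patch):
            p = img[i:i+patch, j:j+patch].ravel()
            code = orthogonal_mp(D, p, n_nonzero_coefs=n_nonzero)
            ref[i:i+patch, j:j+patch] = (D @ code).reshape(patch, patch)
    return ref.ravel()

# Toy setup: random ray matrix, smooth true slowness, fixed random dictionary.
rng = np.random.default_rng(3)
n_side, patch = 16, 4
n_pix = n_side * n_side
G = rng.random((300, n_pix))
xx, yy = np.meshgrid(np.linspace(0, 1, n_side), np.linspace(0, 1, n_side))
s_true = 1.0 + 0.1 * np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
t = G @ s_true.ravel()
D = rng.normal(size=(patch * patch, 32))
D /= np.linalg.norm(D, axis=0)

s_ref = np.full(n_pix, s_true.mean())
for _ in range(5):                      # alternate global and patch steps
    s = global_step(G, t, s_ref, lam=1.0)
    s_ref = patch_step(s, D, n_side, patch, n_nonzero=3)
print("rms slowness error:", np.sqrt(np.mean((s - s_true.ravel()) ** 2)))
```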
Tabet, Michael R.; Norman, Mantana K.; Fey, Brittney K.; Tsibulsky, Vladimir L.; Millard, Ronald W.
2011-01-01
Differences in the time to maximal effect (Tmax) of a series of dopamine receptor antagonists on the self-administration of cocaine are not consistent with their lipophilicity (octanol-water partition coefficients at pH 7.4) and expected rapid entry into the brain after intravenous injection. It was hypothesized that the Tmax reflects the time required for maximal occupancy of receptors, which would occur as equilibrium was approached. If so, the Tmax should be related to the affinity for the relevant receptor population. This hypothesis was tested using a series of nine antagonists having a 2500-fold range of Ki or Kd values for D2-like dopamine receptors. Rats self-administered cocaine at regular intervals and then were injected intravenously with a dose of antagonist, and the self-administration of cocaine was continued for 6 to 10 h. The level of cocaine at the time of every self-administration (satiety threshold) was calculated throughout the session. The satiety threshold was stable before the injection of antagonist and then increased approximately 3-fold over the baseline value at doses of antagonists selected to produce this approximately equivalent maximal magnitude of effect (maximum increase in the equiactive cocaine concentration, satiety threshold; Cmax). Despite the similar Cmax, the mean Tmax varied between 5 and 157 min across this series of antagonists. Furthermore, there was a strong and significant correlation between the in vivo Tmax values for each antagonist and the affinity for D2-like dopamine receptors measured in vitro. It is concluded that the cocaine self-administration paradigm offers a reliable and predictive bioassay for measuring the affinity of a competitive antagonist for D2-like dopamine receptors. PMID:21606176
Ohlsson, A; Steinhaus, D; Kjellström, B; Ryden, L; Bennett, T
2003-06-01
Exercise testing is commonly used in patients with congestive heart failure for diagnostic and prognostic purposes. Such testing may be even more valuable if invasive hemodynamics are acquired. However, this will make the test more complex and expensive and only provides information from isolated moments. We studied serial exercise tests in heart failure patients with implanted hemodynamic monitors allowing recording of central hemodynamics. Twenty-one NYHA Class II-III heart failure patients underwent maximal exercise tests and submaximal bike or 6-min hall walk tests to quantify their hemodynamic responses and to study the feasibility of conducting exercise tests in patients with such devices. Patients were followed for 2-3 years with serial exercise tests. During maximal tests (n=70), heart rate increased by 52 ± 19 bpm while SvO2 decreased by 35 ± 10% saturation units. RV systolic and diastolic pressure increased 29 ± 11 and 11 ± 6 mmHg, respectively, while pulmonary artery diastolic pressure increased 21 ± 8 mmHg. Submaximal bike (n=196) and hall walk tests (n=172) resulted in SvO2 changes of 80 and 91% of the maximal tests, while RV pressures ranged from 72 to 79% of maximal responses. An added potential value of implantable hemodynamic monitors in heart failure patients may be to quantitatively determine the true hemodynamic profile during standard non-invasive clinical exercise tests and to compare that to hemodynamic effects of regular exercise during daily living. It would be of interest to study whether such information could improve the ability to predict changes in a patient's clinical condition and to improve tailoring patient management.
Siveke, Ida; Leibold, Christian; Grothe, Benedikt
2007-11-01
We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to "favorable" ITDs and weakly or not at all in response to "unfavorable" ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Using concurrent noise with a spectral composition in which the neuron's excitatory frequencies are omitted reduced the maximal response similar to that obtained with concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITD might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.
Brand, Samuel P C; Keeling, Matt J
2017-03-01
It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).
Niemelä, Kristiina; Väänänen, Ilkka; Leinonen, Raija; Laukkanen, Pia
2011-08-01
Home-based exercise is a viable alternative for older adults with difficulties in exercise opportunities outside the home. The aim of this study was to investigate the benefits of home-based rocking-chair training, and its effects on the physical performance of elderly women. Community-dwelling women (n=51) aged 73-87 years were randomly assigned to the rocking-chair group (RCG, n=26) or control group (CG, n=25) by drawing lots. Baseline and outcome measurements were hand grip strength, maximal isometric knee extension, maximal walking speed over 10 meters, rising from a chair five times, and the Berg Balance Scale (BBS). The RCG carried out a six-week rocking-chair training program at home, involving ten sessions per week, twice a day for 15 minutes per session, and ten different movements. The CG continued their usual daily lives. After three months, the RCG responded to a mail questionnaire. After the intervention, the RCG improved and the CG declined. The data showed significant interactions of group by time in the BBS score (p=0.001), maximal knee extension strength (p=0.006) and maximal walking speed (p=0.046), which indicates that the change between groups during the follow-up period was significant. Adherence to the training protocol was high (96%). After three months, the exercise program had become a regular home exercise habit for 88.5% of the subjects. Results indicate that elderly women benefit from this easily implemented home-based rocking-chair exercise program. The subjects became motivated to participate in training and continued the exercises. This is a promising alternative exercise method for maintaining physical activity and leads to improvements in physical performance.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preferences-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve future transfer of novel prognostic scoring systems or markers in clinical practice.
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
Estimating allowable-cut by area-scheduling
William B. Leak
2011-01-01
Estimation of the regulated allowable-cut is an important step in placing a forest property under management and ensuring a continued supply of timber over time. Regular harvests also provide for the maintenance of needed wildlife habitat. There are two basic approaches: (1) volume, and (2) area/volume regulation, with many variations of each. Some require...
Incremental Net Effects in Multiple Regression
ERIC Educational Resources Information Center
Lipovetsky, Stan; Conklin, Michael
2005-01-01
A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of the multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…
Estimate Of The Decay Rate Constant of Hydrogen Sulfide Generation From Landfilled Drywall
Research was conducted to investigate the impact of particle size on H2S gas emissions and estimate a decay rate constant for H2S gas generation from the anaerobic decomposition of drywall. Three different particle sizes of regular drywall and one particle size of paperless drywa...
Combined Radar-Radiometer Surface Soil Moisture and Roughness Estimation
NASA Technical Reports Server (NTRS)
Akbar, Ruzbeh; Cosh, Michael H.; O'Neill, Peggy E.; Entekhabi, Dara; Moghaddam, Mahta
2017-01-01
A robust physics-based combined radar-radiometer, or Active-Passive, surface soil moisture and roughness estimation methodology is presented. Soil moisture and roughness retrieval is performed via optimization, i.e., minimization, of a joint objective function which constrains similar resolution radar and radiometer observations simultaneously. A data-driven and noise-dependent regularization term has also been developed to automatically regularize and balance corresponding radar and radiometer contributions to achieve optimal soil moisture retrievals. It is shown that in order to compensate for measurement and observation noise, as well as forward model inaccuracies, in combined radar-radiometer estimation surface roughness can be considered a free parameter. Extensive Monte-Carlo numerical simulations and assessment using field data have been performed to both evaluate the algorithm's performance and to demonstrate soil moisture estimation. Unbiased root mean squared errors (RMSE) range from 0.18 to 0.03 cm3/cm3 for two different land cover types of corn and soybean. In summary, in the context of soil moisture retrieval, the importance of consistent forward emission and scattering development is discussed and presented.
Combined Radar-Radiometer Surface Soil Moisture and Roughness Estimation.
Akbar, Ruzbeh; Cosh, Michael H; O'Neill, Peggy E; Entekhabi, Dara; Moghaddam, Mahta
2017-07-01
A robust physics-based combined radar-radiometer, or Active-Passive, surface soil moisture and roughness estimation methodology is presented. Soil moisture and roughness retrieval is performed via optimization, i.e., minimization, of a joint objective function which constrains similar resolution radar and radiometer observations simultaneously. A data-driven and noise-dependent regularization term has also been developed to automatically regularize and balance corresponding radar and radiometer contributions to achieve optimal soil moisture retrievals. It is shown that in order to compensate for measurement and observation noise, as well as forward model inaccuracies, in combined radar-radiometer estimation surface roughness can be considered a free parameter. Extensive Monte-Carlo numerical simulations and assessment using field data have been performed to both evaluate the algorithm's performance and to demonstrate soil moisture estimation. Unbiased root mean squared errors (RMSE) range from 0.18 to 0.03 cm3/cm3 for two different land cover types of corn and soybean. In summary, in the context of soil moisture retrieval, the importance of consistent forward emission and scattering development is discussed and presented.
Neuron-Type-Specific Utility in a Brain-Machine Interface: a Pilot Study.
Garcia-Garcia, Martha G; Bergquist, Austin J; Vargas-Perez, Hector; Nagai, Mary K; Zariffa, Jose; Marquez-Chin, Cesar; Popovic, Milos R
2017-11-01
Firing rates of single cortical neurons can be volitionally modulated through biofeedback (i.e. operant conditioning), and this information can be transformed to control external devices (i.e. brain-machine interfaces; BMIs). However, not all neurons respond to operant conditioning in BMI implementation. Establishing criteria that predict neuron utility will assist translation of BMI research to clinical applications. Single cortical neurons (n=7) were recorded extracellularly from primary motor cortex of a Long-Evans rat. Recordings were incorporated into a BMI involving up-regulation of firing rate to control the brightness of a light-emitting-diode and subsequent reward. Neurons were classified as 'fast-spiking', 'bursting' or 'regular-spiking' according to waveform-width and intrinsic firing patterns. Fast-spiking and bursting neurons were found to up-regulate firing rate by a factor of 2.43±1.16, demonstrating high utility, while regular-spiking neurons decreased firing rates on average by a factor of 0.73±0.23, demonstrating low utility. The ability to select neurons with high utility will be important to minimize training times and maximize information yield in future clinical BMI applications. The highly contrasting utility observed between fast-spiking and bursting neurons versus regular-spiking neurons allows for the hypothesis to be advanced that intrinsic electrophysiological properties may be useful criteria that predict neuron utility in BMI implementation.
Does aerobic exercise mitigate the effects of cigarette smoking on arterial stiffness?
Park, Wonil; Miyachi, Motohiko; Tanaka, Hirofumi
2014-09-01
The largest percentage of mortality from tobacco smoking is cardiovascular-related. It is not known whether regular participation in exercise mitigates the adverse influence of smoking on vasculature. Accordingly, the authors determined whether regular aerobic exercise is associated with reduced arterial stiffness in men who smoke cigarettes. Using a cross-sectional study design, 78 young men were studied, including sedentary nonsmokers (n=20), sedentary smokers (n=12), physically active nonsmokers (n=21), and physically active smokers (n=25). Arterial stiffness was assessed by brachial-ankle pulse wave velocity (baPWV). There were no group differences in height, body fat, and systolic and diastolic blood pressure. As expected, both physically active groups demonstrated greater maximal oxygen consumption and lower heart rate at rest than their sedentary peers. The sedentary smokers demonstrated greater baPWV than the sedentary nonsmokers (11.8±1 m/s vs 10.6±1 m/s, P=.036). baPWV values were not different between the physically active nonsmokers and the physically active smokers (10.8±1 m/s vs 10.7±1 m/s). Chronic smoking is associated with arterial stiffening in sedentary men but a significant smoking-induced increase in arterial stiffness was not observed in physically active adults. These results are consistent with the idea that regular participation in physical activity may mitigate the adverse effects of smoking on the vasculature. ©2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
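The core numerical step described above, discretizing a Fredholm integral of the first kind and applying Tikhonov regularization, can be sketched in a few lines. The example below substitutes a generic Gaussian kernel for the actual v sin i projection kernel and uses a fixed, hand-picked regularization parameter rather than the selection procedure proposed in the paper.

```python
import numpy as np

def tikhonov_deconvolve(K, g, lam):
    """Solve the discretized Fredholm equation K f = g with Tikhonov
    regularization on the second differences of f (a smoothness prior)."""
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    A = np.vstack([K, np.sqrt(lam) * L])
    b = np.concatenate([g, np.zeros(L.shape[0])])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0.0, None)                 # a density cannot be negative

# Toy example: a Gaussian blur stands in for the v sin i projection kernel.
rng = np.random.default_rng(4)
n = 80
v = np.linspace(0.0, 400.0, n)                   # km/s grid
K = np.exp(-0.5 * ((v[:, None] - v[None, :]) / 25.0) ** 2)
K /= K.sum(axis=1, keepdims=True)
f_true = np.exp(-0.5 * ((v - 180.0) / 40.0) ** 2)
g = K @ f_true + rng.normal(scale=0.01, size=n)
f_hat = tikhonov_deconvolve(K, g, lam=1e-2)
```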
Chen, Zhongxian; Yu, Haitao; Wen, Cheng
2014-01-01
The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of the direct drive ocean wave energy extraction system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed in this paper, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and material. With this control method, the heave speed of the outer buoy of the energy extraction system is in resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that the IM-PID control method has good robustness, high precision, and strong anti-interference ability. PMID:25152913
AdS3 to dS3 transition in the near horizon of asymptotically de Sitter solutions
NASA Astrophysics Data System (ADS)
Sadeghian, S.; Vahidinia, M. H.
2017-08-01
We consider two solutions of Einstein-Λ theory which admit the extremal vanishing horizon (EVH) limit, odd-dimensional multispinning Kerr black hole (in the presence of cosmological constant) and cosmological soliton. We show that the near horizon EVH geometry of Kerr has a three-dimensional maximally symmetric subspace whose curvature depends on rotational parameters and the cosmological constant. In the Kerr-dS case, this subspace interpolates between AdS3 , three-dimensional flat and dS3 by varying rotational parameters, while the near horizon of the EVH cosmological soliton always has a dS3 . The feature of the EVH cosmological soliton is that it is regular everywhere on the horizon. In the near EVH case, these three-dimensional parts turn into the corresponding locally maximally symmetric spacetimes with a horizon: Kerr-dS3 , flat space cosmology or BTZ black hole. We show that their thermodynamics match with the thermodynamics of the original near EVH black holes. We also briefly discuss the holographic two-dimensional CFT dual to the near horizon of EVH solutions.
The effect of a novel square-profile hand rim on propulsion technique of wheelchair tennis players.
de Groot, Sonja; Bos, Femke; Koopman, Jorine; Hoekstra, Aldo E; Vegter, Riemer J K
2018-09-01
The purpose of this study was to investigate the effect of a square-profile hand rim (SPR) on propulsion technique of wheelchair tennis players. Eight experienced wheelchair tennis players performed two sets of three submaximal exercise tests and six sprint tests on a wheelchair ergometer, once with a regular rim (RR) and once with a SPR. Torque and velocity were measured continuously and power output and timing variables were calculated. No significant differences were found in propulsion technique between the RR and SPR during the submaximal tests. When sprinting with the racket, the SPR showed a significantly lower overall speed (9.1 vs. 9.8 m s-1), maximal speed (10.5 vs. 11.4 m s-1), and maximal acceleration (18.6 vs. 10.9 m s-2). The SPR does not seem to improve the propulsion technique when propelling a wheelchair with a tennis racket in the hand. However, the results gave input for new hand rim designs for wheelchair tennis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Network marketing on a small-world network
NASA Astrophysics Data System (ADS)
Kim, Beom Jun; Jun, Tackseung; Kim, Jeong-Yoo; Choi, M. Y.
2006-02-01
We investigate a dynamic model of network marketing in a small-world network structure artificially constructed similarly to the Watts-Strogatz network model. Different from the traditional marketing, consumers can also play the role of the manufacturer's selling agents in network marketing, which is stimulated by the referral fee the manufacturer offers. As the wiring probability α is increased from zero to unity, the network changes from the one-dimensional regular directed network to the star network where all but one player are connected to one consumer. The price p of the product and the referral fee r are used as free parameters to maximize the profit of the manufacturer. It is observed that at α=0 the maximized profit is constant independent of the network size N while at α≠0, it increases linearly with N. This is in parallel to the small-world transition. It is also revealed that while the optimal value of p stays at an almost constant level in a broad range of α, that of r is sensitive to a change in the network structure. The consumer surplus is also studied and discussed.
2014-01-01
An amylase- and lipase-producing bacterium (strain C2) was enriched and isolated from soil regularly contaminated with olive washing wastewater in Sfax, Tunisia. The cells were aerobic, mesophilic, Gram-negative, motile, and non-sporulating, growing optimally at pH 7 and 30°C and tolerating up to 10% (w/v) NaCl. The predominant fatty acids were found to be C18:1ω7c (32.8%), C16:1ω7c (27.3%) and C16:0 (23.1%). Phylogenetic analysis of the 16S rRNA gene revealed that this strain belongs to the genus Pseudomonas. Strain C2 was found to be closely related to Pseudomonas luteola, with more than 99% similarity. Amylase extraction was optimized using a Box-Behnken design (BBD). Maximal activity was found when the pH and temperature ranged from 5.5 to 6.5 and from 33 to 37°C, respectively. Under these conditions, amylase activity was about 9.48 U/ml. PMID:24405763
How the medical practice employee can get more from continuing education programs.
Hills, Laura Sachs
2007-01-01
Continuing education can be a win-win situation for the medical practice employee and for the practice. However, to get full value from continuing education, medical practice employees must become informed consumers of such programs. They must know how to select the right educational programs for their needs and maximize their own participation. Employees who attend continuing education programs without preparation may not get the full benefit from their experiences. This article suggests benchmarks to help determine whether a continuing education program is worthwhile and offers advice for calculating the actual cost of any continuing education program. It provides a how-to checklist for medical practice employees so they know how to get the most out of their continuing education experience before, during, and after the program. This article also suggests using a study partner system to double educational efforts among employees and offers 10 practical tips for taking and using notes at a continuing education program. Finally, this article outlines the benefits of becoming a regular student and offers three practical tips for maximizing the employee's exhibit hall experience.
Evaluating alternative prescribed burning policies to reduce net economic damages from wildfire
D. Evan Mercer; Jeffrey P. Prestemon; David T. Butry; John M. Pye
2007-01-01
We estimate a wildfire risk model with a new measure of wildfire output, intensity-weighted risk, and use it in Monte Carlo simulations to estimate welfare changes from alternative prescribed burning policies. Using Volusia County, Florida, as a case study, an annual prescribed burning rate of 13% of all forest lands maximizes net welfare; ignoring the effects on...
ERIC Educational Resources Information Center
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, combined with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
This work presents an active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort, and achieves superior performance rapidly with as much as 30% relative improvement over the margin-based sampling.
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
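The deterministic moisture-maximization step that the paper recasts probabilistically is a simple scaling of each observed storm. The sketch below illustrates it with an invented storm catalogue; the precipitation and precipitable-water values are placeholders, and a real analysis would apply further transposition and envelope adjustments not shown here.

```python
def moisture_maximized_precip(storm_precip_mm, storm_pw_mm, max_pw_mm):
    """Classical moisture maximization: scale observed storm precipitation by
    the ratio of maximum to observed precipitable water."""
    return storm_precip_mm * (max_pw_mm / storm_pw_mm)

# Hypothetical storm catalogue: (precipitation, storm precipitable water,
# climatological maximum precipitable water), all in mm.
storms = [(180.0, 45.0, 60.0), (140.0, 50.0, 62.0), (210.0, 55.0, 58.0)]
maximized = [moisture_maximized_precip(*s) for s in storms]
pmp_deterministic = max(maximized)   # the single value a traditional analysis reports
print(maximized, pmp_deterministic)
```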
Estimated maximal and current brain volume predict cognitive ability in old age.
Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M
2013-12-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (male, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age but that these effects were the same for both white and gray matter. As 1 of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. Copyright © 2013 Elsevier Inc. All rights reserved.
Relationship between arterial oxygen desaturation and ventilation during maximal exercise.
Miyachi, M; Tabata, I
1992-12-01
The purpose of the present study was to investigate the contribution of ventilation to arterial O2 desaturation during maximal exercise. Nine untrained subjects and 22 trained long-distance runners [age 18-36 yr, maximal O2 uptake (VO2max) 48-74 ml.min-1 x kg-1] volunteered to participate in the study. The subjects performed an incremental exhaustive cycle ergometry test at 70 rpm of pedaling frequency, during which arterial O2 saturation (SaO2) and ventilatory data were collected every minute. SaO2 was estimated with a pulse oximeter. A significant positive correlation was found between SaO2 and end-tidal PO2 (PETO2; r = 0.72, r2 = 0.52, P < 0.001) during maximal exercise. These statistical results suggest that approximately 50% of the variability of SaO2 can be accounted for by differences in PETO2, which reflects alveolar PO2. Furthermore, PETO2 was highly correlated with the ventilatory equivalent for O2 (VE/VO2; r = 0.91, P < 0.001), which indicates that PETO2 could be the result of ventilation stimulated by maximal exercise. Finally, SaO2 was positively related to VE/VO2 during maximal exercise (r = 0.74, r2 = 0.55, P < 0.001). Therefore, one-half of the arterial O2 desaturation occurring during maximal exercise may be explained by less hyperventilation, specifically for our subjects, who demonstrated a wide range of trained states. Furthermore, we found an indirect positive correlation between SaO2 and ventilatory response to CO2 at rest (r = 0.45, P < 0.05), which was mediated by ventilation during maximal exercise. These data also suggest that ventilation is an important factor for arterial O2 desaturation during maximal exercise.
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Extracting volatility signal using maximum a posteriori estimation
NASA Astrophysics Data System (ADS)
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and hence log-return marginal distributions with heavy tails. We consider two routes to choose the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
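Under a double exponential (Laplace) prior on log-volatility increments, the MAP problem reduces to an L1-penalized fit of the log squared returns, similar in spirit to total-variation denoising. The sketch below solves a smoothed version of that objective with L-BFGS; it ignores the HMM machinery and the non-Gaussian observation noise of the full model, and the simulated returns and penalty value are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def map_log_volatility(returns, lam=20.0, eps=1e-6):
    """MAP-style estimate of a log-volatility path: Gaussian fit to log squared
    returns plus a Laplace (double exponential) penalty on increments.
    The absolute value is smoothed so that L-BFGS-B can be applied."""
    y = np.log(returns**2 + 1e-12)
    def objective(h):
        fit = 0.5 * np.sum((y - h) ** 2)
        jumps = np.sqrt(np.diff(h) ** 2 + eps)   # smooth |h_t - h_{t-1}|
        return fit + lam * np.sum(jumps)
    res = minimize(objective, x0=np.full(len(y), y.mean()), method="L-BFGS-B")
    return res.x                                  # denoised log-volatility signal

# Hypothetical FX-like returns with a volatility jump halfway through.
rng = np.random.default_rng(5)
vol = np.where(np.arange(300) < 150, 0.005, 0.02)
returns = rng.normal(scale=vol)
h_hat = map_log_volatility(returns)
```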
Sarzynski, M A; Rankinen, T; Earnest, C P; Leon, A S; Rao, D C; Skinner, J S; Bouchard, C
2013-01-01
The purpose of this study was to examine how well two commonly used age-based prediction equations for maximal heart rate (HRmax) estimate the actual HRmax measured in Black and White adults from the HERITAGE Family Study. A total of 762 sedentary subjects (39% Black, 57% Females) from HERITAGE were included. HRmax was measured during maximal exercise tests using cycle ergometers. Age-based HRmax was predicted using the Fox (220 - age) and Tanaka (208 - 0.7 × age) formulas. The standard error of estimate (SEE) of predicted HRmax was 12.4 and 11.4 bpm for the Fox and Tanaka formulas, respectively, indicating a wide spread of measured HRmax values compared with their age-predicted values. The SEE (shown as Fox/Tanaka) was higher in Blacks (14.4/13.1 bpm) and Males (12.6/11.7 bpm) compared to Whites (11.0/10.2 bpm) and Females (12.3/11.2 bpm) for both formulas. The SEE was higher in subjects above the BMI median (12.8/11.9 bpm) and below the fitness median (13.4/12.4 bpm) than in those below the BMI median (12.2/11.0 bpm) and above the fitness median (11.4/10.3 bpm) for both formulas. Our findings show that, based on the SEE, the prevailing age-based estimated HRmax equations do not precisely predict an individual's measured HRmax. Copyright © 2013 Wiley Periodicals, Inc.
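The two prediction equations and the error summary lend themselves to a few lines of code. The sketch below computes Fox and Tanaka predictions and an SEE taken here simply as the RMS prediction error; the age and measured-HRmax pairs are invented examples, not HERITAGE data.

```python
import numpy as np

def fox(age):
    return 220 - age

def tanaka(age):
    return 208 - 0.7 * age

def see(measured, predicted):
    """Standard error of estimate, taken here simply as the RMS prediction error."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((measured - predicted) ** 2))

# Hypothetical (age, measured HRmax) pairs; NOT the HERITAGE data.
ages = np.array([25, 34, 41, 52, 60])
hr_measured = np.array([196, 182, 185, 160, 171])
print("Fox SEE   :", round(see(hr_measured, fox(ages)), 1), "bpm")
print("Tanaka SEE:", round(see(hr_measured, tanaka(ages)), 1), "bpm")
```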
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
NASA Astrophysics Data System (ADS)
Yun, Dongfang; Protas, Bartosz
2018-02-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by analyzing the maximum growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical and supercritical regimes. Since solutions to this equation exhibit, respectively, globally well-posed behavior and finite-time blowup in these two regimes, this makes it a useful model to study the maximum instantaneous growth of enstrophy possible in these two distinct situations. First, we obtain estimates on the rates of growth and then show that these estimates are sharp up to numerical prefactors. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. We conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This indicates that the maximum enstrophy rate of growth changes smoothly as global well-posedness is lost when the fractional dissipation exponent attains supercritical values. In addition, nontrivial behavior is revealed for the maximum rate of growth of the fractional enstrophy obtained for small values of the fractional dissipation exponents. We also characterize the structure of the maximizers in different cases.
A guide to the visual analysis and communication of biomolecular structural data.
Johnson, Graham T; Hertig, Samuel
2014-10-01
Biologists regularly face an increasingly difficult task - to effectively communicate bigger and more complex structural data using an ever-expanding suite of visualization tools. Whether presenting results to peers or educating an outreach audience, a scientist can achieve maximal impact with minimal production time by systematically identifying an audience's needs, planning solutions from a variety of visual communication techniques and then applying the most appropriate software tools. A guide to available resources that range from software tools to professional illustrators can help researchers to generate better figures and presentations tailored to any audience's needs, and enable artistically inclined scientists to create captivating outreach imagery.
How can organisations influence their older employees' decision of when to retire?
Oakman, Jodi; Howie, Linsey
2013-01-01
This article reports on a study of older employees of a large public service organisation and examines their experiences of employment and their intentions to retire. This study collected qualitative data through focus group interviews with 42 participants. Key themes derived from data analysis with regard to influences on retirement intentions included: personal, organizational and legislative influences. The study concludes that organisations can retain their older workers longer if they provide sufficient support, the work offered is satisfying, and part-time work is available. Regular review of employees' performance and satisfaction is required to maximize the productivity and retention of older workers.
Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz
We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having nearly flat central region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.
Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking performance. A future study will investigate spatial uniformity of motion and its effect on the motion estimation errors.
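The α–β filter named in this abstract is a standard fixed-gain state observer. As a rough illustration of how such temporal regularization smooths frame-to-frame displacement estimates, here is a minimal sketch; the gains, imaging rate, and the toy breathing-like motion are assumptions for illustration only, and the paper's similarity-threshold gating is not reproduced.

```python
import numpy as np

def alpha_beta_filter(measurements, dt=1.0 / 20, alpha=0.5, beta=0.1):
    """Smooth noisy displacement estimates with a fixed-gain alpha-beta observer."""
    x_est, v_est = measurements[0], 0.0     # initial position and velocity states
    smoothed = []
    for z in measurements:
        x_pred = x_est + v_est * dt         # predict from the current state
        r = z - x_pred                      # residual between measurement and prediction
        x_est = x_pred + alpha * r          # correct position
        v_est = v_est + (beta / dt) * r     # correct velocity
        smoothed.append(x_est)
    return np.array(smoothed)

# Toy usage: breathing-like sinusoidal motion sampled at 20 Hz with measurement noise.
t = np.arange(0, 30, 1.0 / 20)
truth = 5.0 * np.sin(2 * np.pi * 0.25 * t)
noisy = truth + np.random.default_rng(0).normal(0, 1.0, t.size)
print(np.abs(alpha_beta_filter(noisy) - truth).mean())
```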
Hopkins, Melanie J.; Smith, Andrew B.
2015-01-01
How ecological and morphological diversity accrues over geological time has been much debated by paleobiologists. Evidence from the fossil record suggests that many clades reach maximal diversity early in their evolutionary history, followed by a decline in evolutionary rates as ecological space fills or due to internal constraints. Here, we apply recently developed methods for estimating rates of morphological evolution during the post-Paleozoic history of a major invertebrate clade, the Echinoidea. Contrary to expectation, rates of evolution were lowest during the initial phase of diversification following the Permo-Triassic mass extinction and increased over time. Furthermore, although several subclades show high initial rates and net decreases in rates of evolution, consistent with “early bursts” of morphological diversification, at more inclusive taxonomic levels, these bursts appear as episodic peaks. Peak rates coincided with major shifts in ecological morphology, primarily associated with innovations in feeding strategies. Despite having similar numbers of species in today’s oceans, regular echinoids have accrued far less morphological diversity than irregular echinoids due to lower intrinsic rates of morphological evolution and less morphological innovation, the latter indicative of constrained or bounded evolution. These results indicate that rates of evolution are extremely heterogeneous through time and their interpretation depends on the temporal and taxonomic scale of analysis. PMID:25713369
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
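To make the hybrid idea concrete (sparsity of kernel intensities plus smoothness of their derivative), the toy sketch below minimizes ||A k - b||^2 + nu ||D k||^2 + mu ||k||_1 for a 1-D kernel with a simple proximal-gradient loop. The 1-D setting, the solver, and all parameter values are assumptions for illustration; the paper itself works on 2-D images and solves its subproblems with ADMM.

```python
import numpy as np

def conv_matrix(x, ksize):
    """Matrix A such that A @ k equals np.convolve(x, k, mode='valid')."""
    n = len(x) - ksize + 1
    return np.array([x[i:i + ksize][::-1] for i in range(n)])

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def estimate_kernel(x, b, ksize=9, mu=1e-3, nu=1e-1, iters=2000):
    """Toy hybrid (L1 + squared L2 of derivative) kernel estimate via proximal gradient."""
    A = conv_matrix(x, ksize)
    D = np.diff(np.eye(ksize), axis=0)                        # first-difference operator
    L = 2 * (np.linalg.norm(A, 2) ** 2 + nu * np.linalg.norm(D, 2) ** 2)
    step = 1.0 / L                                            # safe step from the Lipschitz bound
    k = np.zeros(ksize)
    for _ in range(iters):
        grad = 2 * A.T @ (A @ k - b) + 2 * nu * D.T @ (D @ k)  # smooth part of the objective
        k = soft_threshold(k - step * grad, step * mu)          # proximal step for the L1 term
    return k / max(k.sum(), 1e-12)                              # kernels are usually normalized

# Usage: recover a small blur kernel from a sharp/blurred signal pair.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
k_true = np.array([0.0, 0.1, 0.4, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0])
b = np.convolve(x, k_true, mode="valid") + 0.01 * rng.standard_normal(192)
print(np.round(estimate_kernel(x, b), 2))
```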
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
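The regularization route described above amounts to a ridge-type (zeroth-order Tikhonov) least-squares fit whose weight is chosen by cross-validation. A minimal sketch of that selection loop follows, assuming a toy linear forward model and ignoring the nonnegativity constraint a real spectrum estimate would need.

```python
import numpy as np

def ridge_fit(A, y, lam):
    """Zeroth-order Tikhonov (ridge) solution of A s ≈ y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def cross_validate_lambda(A, y, lambdas, k=5, seed=0):
    """Pick the regularization weight by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for lam in lambdas:
        fold_err = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            s = ridge_fit(A[train], y[train], lam)
            fold_err += np.mean((A[f] @ s - y[f]) ** 2)
        errs.append(fold_err / k)
    return lambdas[int(np.argmin(errs))]

# Toy usage: measurements y assumed linear in a discretized spectrum s.
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((40, 10)))
s_true = np.abs(rng.standard_normal(10))
y = A @ s_true + 0.05 * rng.standard_normal(40)
lam = cross_validate_lambda(A, y, np.logspace(-4, 2, 13))
print(lam, np.round(ridge_fit(A, y, lam), 2))
```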
Increased cardiac output elicits higher V̇O2max in response to self-paced exercise.
Astorino, Todd Anthony; McMillan, David William; Edmunds, Ross Montgomery; Sanchez, Eduardo
2015-03-01
Recently, a self-paced protocol demonstrated higher maximal oxygen uptake versus the traditional ramp protocol. The primary aim of the current study was to further explore potential differences in maximal oxygen uptake between the ramp and self-paced protocols using simultaneous measurement of cardiac output. Active men and women of various fitness levels (N = 30, mean age = 26.0 ± 5.0 years) completed 3 graded exercise tests separated by a minimum of 48 h. Participants initially completed progressive ramp exercise to exhaustion to determine maximal oxygen uptake followed by a verification test to confirm maximal oxygen uptake attainment. Over the next 2 sessions, they performed a self-paced and an additional ramp protocol. During exercise, gas exchange data were obtained using indirect calorimetry, and thoracic impedance was utilized to estimate hemodynamic function (stroke volume and cardiac output). One-way ANOVA with repeated measures was used to determine differences in maximal oxygen uptake and cardiac output between ramp and self-paced testing. Results demonstrated lower (p < 0.001) maximal oxygen uptake via the ramp (47.2 ± 10.2 mL·kg(-1)·min(-1)) versus the self-paced (50.2 ± 9.6 mL·kg(-1)·min(-1)) protocol, with no interaction (p = 0.06) seen for fitness level. Maximal heart rate and cardiac output (p = 0.02) were higher in the self-paced protocol versus ramp exercise. In conclusion, data show that the traditional ramp protocol may underestimate maximal oxygen uptake compared with a newly developed self-paced protocol, with a greater cardiac output potentially responsible for this outcome.
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.
Information transmission using non-Poisson regular firing.
Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru
2013-04-01
In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
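The key quantity in this result is the coefficient of variation (CV) of the interspike intervals, to which the detection threshold is reported to be proportional. A quick sketch of that statistic; the spike trains below are simulated stand-ins, not data from the study.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals (CV = std / mean)."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std(ddof=1) / isi.mean()

# Usage: a Poisson train has CV near 1, a regular (gamma-like) train well below 1.
rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(0.1, 1000))
regular_train = np.cumsum(rng.gamma(shape=5.0, scale=0.02, size=1000))
print(round(isi_cv(poisson_train), 2), round(isi_cv(regular_train), 2))
```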
Martínez-Lagunas, Vanessa; Hartmann, Ulrich
2014-09-01
To evaluate the validity of the Yo-Yo Intermittent Recovery Test Level 1 (YYIR1) for the direct assessment and the indirect estimation of maximal oxygen consumption (VO2max) in female soccer players compared with a maximal laboratory treadmill test (LTT). Eighteen female soccer players (21.5 ± 3.4 y, 165.6 ± 7.5 cm, 63.3 ± 7.4 kg; mean ± SD) completed an LTT and a YYIR1 in random order (1 wk apart). Their VO2max was directly measured via portable spirometry during both tests and indirectly estimated from a published non-gender-specific formula (YYIR1-F1). The measured VO2max values in LTT and YYIR1 were 55.0 ± 5.3 and 49.9 ± 4.9 mL · kg-1 · min-1, respectively, while the estimated VO2max values from YYIR1-F1 corresponded to 45.2 ± 3.4 mL · kg-1 · min-1. Large positive correlations between the VO2max values from YYIR1 and LTT (r = .83, P < .001, 90% confidence interval = .64-.92) and YYIR1-F1 and LTT (r = .67, P = .002, .37-.84) were found. However, the YYIR1 significantly underestimated players' VO2max by 9.4% compared with LTT (P < .001) with Bland-Altman 95% limits of agreement ranging from -20.0% to 1.4%. A significant underestimation from the YYIR1-F1 (P < .001) was also identified (17.8% with Bland-Altman 95% limits of agreement ranging from -31.8% to -3.8%). The YYIR1 and YYIR1-F1 are not accurate methods for the direct assessment or indirect estimation of VO2max in female soccer players. The YYIR1-F1 lacks gender specificity, which might have been the reason for its larger error.
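The Bland-Altman limits of agreement quoted above are computed from the percentage differences between the two methods. A small sketch follows; the VO2max pairs are invented, and using the LTT value as the percentage denominator is an assumption rather than a detail stated in the abstract.

```python
import numpy as np

def bland_altman_percent(reference, test):
    """Bias and 95% limits of agreement for percentage differences from a reference."""
    reference = np.asarray(reference, dtype=float)
    diff_pct = 100.0 * (np.asarray(test, dtype=float) - reference) / reference
    bias = diff_pct.mean()
    sd = diff_pct.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Usage with made-up VO2max pairs (mL·kg-1·min-1): laboratory treadmill test vs YYIR1.
ltt = np.array([55.0, 52.3, 58.1, 49.8, 61.0])
yyir1 = np.array([50.1, 47.0, 53.5, 44.9, 55.2])
print(bland_altman_percent(ltt, yyir1))
```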
van Heesch, Peter N; Struijk, Pieter C; Laudy, Jaqueline A M; Steegers, Eric A P; Wildschut, Hajo I J
2010-05-01
To establish how different methods of estimating gestational age (GA) affect reliability of first-trimester screening for Down syndrome. Retrospective single-center study of 100 women with a viable singleton pregnancy, who had first-trimester screening. We calculated multiples of the median (MoM) for maternal-serum free beta human chorionic gonadotropin (free beta-hCG) and pregnancy associated plasma protein-A (PAPP-A), derived from either last menstrual period (LMP) or ultrasound-dating scans. In women with a regular cycle, LMP-derived estimates of GA were two days longer (range -11 to 18), than crown-rump length (CRL)-derived estimates of GA whereas this discrepancy was more pronounced in women who reported to have an irregular cycle, i.e., six days (range -7 to 32). Except for PAPP-A in the regular-cycle group, all differences were significant. Consequently, risk estimates are affected by the mode of estimating GA. In fact, LMP-based estimates revealed ten "screen-positive" cases compared to five "screen-positive" cases where GA was derived from dating-scans. Provided fixed values for nuchal translucency are applied, dating-scans reduce the number of screen-positive findings on the basis of biochemical screening. We recommend implementation of guidelines for Down syndrome screening based on CRL-dependent rather than LMP-dependent parameters of GA.
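The serum markers are interpreted as multiples of the gestational-age-specific median (MoM), which is why the dating method shifts the risk estimate: a different gestational age selects a different median. A minimal sketch with hypothetical medians (the table values are placeholders, not published reference medians).

```python
def multiples_of_median(measured, ga_days, median_by_ga):
    """Convert a serum marker to multiples of the gestational-age-specific median."""
    return measured / median_by_ga[ga_days]

# Usage: the same PAPP-A level gives a different MoM under CRL- vs LMP-based dating.
median_by_ga = {70: 1.0, 72: 1.1, 76: 1.4}                 # hypothetical medians (IU/L)
papp_a = 1.2
print(multiples_of_median(papp_a, 72, median_by_ga))        # CRL-dated gestational age
print(multiples_of_median(papp_a, 76, median_by_ga))        # LMP-dated age, a few days longer
```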
Park, Sohyun; Blanck, Heidi M.; Sherry, Bettylou; Jones, Sherry Everett; Pan, Liping
2015-01-01
Limited research shows an inconclusive association between soda intake and asthma, potentially attributable to certain preservatives in sodas. This cross-sectional study examined the association between regular (nondiet)-soda intake and current asthma among a nationally representative sample of high school students. Analysis was based on the 2009 national Youth Risk Behavior Survey and included 15,960 students (grades 9 through 12) with data for both regular-soda intake and current asthma status. The outcome measure was current asthma (ie, told by doctor/nurse that they had asthma and still have asthma). The main exposure variable was regular-soda intake (ie, drank a can/bottle/glass of soda during the 7 days before the survey). Multivariable logistic regression was used to estimate the adjusted odds ratios for regular-soda intake with current asthma after controlling for age, sex, race/ethnicity, weight status, and current cigarette use. Overall, 10.8% of students had current asthma. In addition, 9.7% of students who did not drink regular soda had current asthma, and 14.7% of students who drank regular soda three or more times per day had current asthma. Compared with those who did not drink regular soda, odds of having current asthma were higher among students who drank regular soda two times per day (adjusted odds ratio = 1.28; 95% CI 1.02 to 1.62) and three or more times per day (adjusted odds ratio = 1.64; 95% CI 1.25 to 2.16). The association between high regular-soda intake and current asthma suggests efforts to reduce regular-soda intake among youth might have benefits beyond improving diet quality. However, this association needs additional research, such as a longitudinal examination. PMID:23260727
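The adjusted odds ratios reported here come from exponentiating the coefficients of a multivariable logistic regression. A small sketch on simulated data is shown below; the real YRBS analysis also accounts for the complex survey design (weights, strata), which this sketch ignores, and all simulated effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "soda_per_day": rng.integers(0, 4, n),     # 0, 1, 2, 3+ times/day
    "age": rng.integers(14, 19, n),
    "male": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
logit_p = -2.2 + 0.25 * df.soda_per_day + 0.3 * df.smoker
df["asthma"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression; exponentiated coefficients are adjusted odds ratios.
model = smf.logit("asthma ~ C(soda_per_day) + age + male + smoker", data=df).fit(disp=0)
print(np.exp(model.params).round(2))
```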
Park, Sohyun; Blanck, Heidi M; Sherry, Bettylou; Jones, Sherry Everett; Pan, Liping
2013-01-01
Limited research shows an inconclusive association between soda intake and asthma, potentially attributable to certain preservatives in sodas. This cross-sectional study examined the association between regular (nondiet)-soda intake and current asthma among a nationally representative sample of high school students. Analysis was based on the 2009 national Youth Risk Behavior Survey and included 15,960 students (grades 9 through 12) with data for both regular-soda intake and current asthma status. The outcome measure was current asthma (ie, told by doctor/nurse that they had asthma and still have asthma). The main exposure variable was regular-soda intake (ie, drank a can/bottle/glass of soda during the 7 days before the survey). Multivariable logistic regression was used to estimate the adjusted odds ratios for regular-soda intake with current asthma after controlling for age, sex, race/ethnicity, weight status, and current cigarette use. Overall, 10.8% of students had current asthma. In addition, 9.7% of students who did not drink regular soda had current asthma, and 14.7% of students who drank regular soda three or more times per day had current asthma. Compared with those who did not drink regular soda, odds of having current asthma were higher among students who drank regular soda two times per day (adjusted odds ratio=1.28; 95% CI 1.02 to 1.62) and three or more times per day (adjusted odds ratio=1.64; 95% CI 1.25 to 2.16). The association between high regular-soda intake and current asthma suggests efforts to reduce regular-soda intake among youth might have benefits beyond improving diet quality. However, this association needs additional research, such as a longitudinal examination. Published by Elsevier Inc.
Quantitative Oxygenation Venography from MRI Phase
Fan, Audrey P.; Bilgic, Berkin; Gagnon, Louis; Witzel, Thomas; Bhat, Himanshu; Rosen, Bruce R.; Adalsteinsson, Elfar
2014-01-01
Purpose To demonstrate acquisition and processing methods for quantitative oxygenation venograms that map in vivo oxygen saturation (SvO2) along cerebral venous vasculature. Methods Regularized quantitative susceptibility mapping (QSM) is used to reconstruct susceptibility values and estimate SvO2 in veins. QSM with ℓ1 and ℓ2 regularization are compared in numerical simulations of vessel structures with known magnetic susceptibility. Dual-echo, flow-compensated phase images are collected in three healthy volunteers to create QSM images. Bright veins in the susceptibility maps are vectorized and used to form a three-dimensional vascular mesh, or venogram, along which to display SvO2 values from QSM. Results Quantitative oxygenation venograms that map SvO2 along brain vessels of arbitrary orientation and geometry are shown in vivo. SvO2 values in major cerebral veins lie within the normal physiological range reported by 15O positron emission tomography. SvO2 from QSM is consistent with previous MR susceptometry methods for vessel segments oriented parallel to the main magnetic field. In vessel simulations, ℓ1 regularization results in less than 10% SvO2 absolute error across all vessel tilt orientations and provides more accurate SvO2 estimation than ℓ2 regularization. Conclusion The proposed analysis of susceptibility images enables reliable mapping of quantitative SvO2 along venograms and may facilitate clinical use of venous oxygenation imaging. PMID:24006229
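Once the vein susceptibility is reconstructed, SvO2 follows from the susceptibility difference between venous blood and the surrounding tissue. A one-line sketch is given below using commonly cited literature constants; the deoxyhemoglobin susceptibility value and the hematocrit are assumptions of this sketch, not values quoted in the abstract.

```python
def svo2_from_susceptibility(delta_chi_ppm, hct=0.42, delta_chi_do_ppm=3.39):
    """Venous oxygen saturation from the vein-tissue susceptibility difference.

    Uses SvO2 = 1 - dchi / (dchi_do * Hct), with dchi_do ~ 3.39 ppm (SI, per unit
    hematocrit, fully deoxygenated blood) taken as a common literature assumption.
    """
    return 1.0 - delta_chi_ppm / (delta_chi_do_ppm * hct)

# A vein measuring 0.45 ppm above surrounding tissue on the susceptibility map.
print(round(svo2_from_susceptibility(0.45), 2))
```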
Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng
2018-02-16
Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost-the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.
NASA Astrophysics Data System (ADS)
Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng
2018-02-01
Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost—the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.
Lin, Shin-Yi; Chien, Shih-Chang; Wang, Sheng-Yang; Mau, Jeng-Leun
2016-01-01
Pleurotus citrinopileatus mycelium was prepared with high ergothioneine (Hi-Ergo) content and its proximate composition, nonvolatile taste components, and antioxidant properties were studied. The ergothioneine contents of fruiting bodies and Hi-Ergo and regular mycelia were 3.89, 14.57, and 0.37 mg/g dry weight, respectively. Hi-Ergo mycelium contained more dietary fiber, soluble polysaccharides, and ash but less carbohydrates, reducing sugar, fiber, and fat than regular mycelium. However, Hi-Ergo mycelium contained the smallest amounts of total sugars and polyols (47.43 mg/g dry weight). In addition, Hi-Ergo mycelium showed the most intense umami taste. On the basis of the half-maximal effective concentration values obtained, the 70% ethanolic extract from Hi-Ergo mycelium showed the most effective antioxidant activity, reducing power, and scavenging ability, whereas the fruiting body showed the most effective antioxidant activity, chelating ability, and Trolox-equivalent antioxidant capacity. Overall, Hi-Ergo mycelium could be beneficially used as a food-flavoring material or as a nutritional supplement.
Field experience and performance evaluation of a medium-concentration CPV system
NASA Astrophysics Data System (ADS)
Norton, Matthew; Bentley, Roger; Georghiou, George E.; Chonavel, Sylvain; De Mutiis, Alfredo
2012-10-01
With the aim of gaining experience and performance data from a location with a harsh summer climate, a 70X concentrating photovoltaic (CPV) system was installed in January 2009 in Nicosia, Cyprus. The performance of this system has been monitored using regular current-voltage characterisations for three years. Over this period, the output of the system has remained fairly constant. Measured performance ratios varied from 0.79 to 0.86 in the winter, but fell to 0.64 over the year when left uncleaned. Operating cell temperatures were modeled and found to be similar to those of flat plate modules. The most significant causes of energy loss have been identified as originating from tracking issues and soiling. Losses due to soiling could account for a drop in output of 0.2% per day. When cleaned and properly oriented, the normalized output of the system has remained constant, suggesting that this particular design is tolerant to the physical strain of long-term outdoor exposure in harsh summer conditions. Regular cleaning and reliable tracker operation are shown to be essential for maximizing energy yield.
NASA Astrophysics Data System (ADS)
Wijayanto, D.; Kurohman, F.; Nugroho, RA
2018-03-01
The purpose of this research was to develop a bioeconomic model of profit maximization that can be applied to red tilapia culture. The fish growth model was developed using a polynomial growth function. Profit was maximized by setting the first derivative of the profit equation with respect to culture time equal to zero. The research also developed equations to estimate the culture time needed to reach a target fish harvest size. The results showed that the model can be applied to red tilapia culture. In the case of this study, red tilapia culture achieved the maximum profit at 584 days of culture, with a profit of Rp. 28,605,731 per culture cycle. With a target harvest size of 250 g, red tilapia culture requires 82 days of culture time.
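The optimization step is essentially calculus: write profit as a polynomial function of culture time and set its first derivative to zero. A numerical sketch of that idea is shown below; the growth coefficients, prices, and costs are hypothetical placeholders, not the values estimated in the study.

```python
import numpy as np

# Hypothetical inputs for illustration only.
growth = np.polynomial.Polynomial([5.0, 1.2, 0.004, -0.000004])  # fish weight (g) vs day t
price_per_g = 30.0          # revenue per gram of fish (Rp)
n_fish = 5000               # number of fish in the cycle
daily_cost = 120000.0       # feed, labor and overhead per day (Rp)
fixed_cost = 8000000.0      # seed and preparation cost (Rp)

# Profit(t) = revenue(t) - costs(t), both polynomials in culture time t.
profit = price_per_g * n_fish * growth - np.polynomial.Polynomial([fixed_cost, daily_cost])

# Maximum profit occurs where the first derivative with respect to time is zero.
crit = profit.deriv().roots()
crit = crit[np.isreal(crit) & (crit.real > 0)].real
t_star = crit[np.argmax(profit(crit))]
print(round(t_star), round(profit(t_star)))
```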
Predictors of VO2Peak in children age 6- to 7-years-old.
Dencker, Magnus; Hermansen, Bianca; Bugge, Anna; Froberg, Karsten; Andersen, Lars B
2011-02-01
This study investigated the predictors of aerobic fitness (VO2PEAK) in young children on a population-base. Participants were 436 children (229 boys and 207 girls) aged 6.7 ± 0.4 yrs. VO2PEAK was measured during a maximal treadmill exercise test. Physical activity was assessed by accelerometers. Total body fat and total fat free mass were estimated from skinfold measurements. Regression analyses indicated that significant predictors for VO2PEAK per kilogram body mass were total body fat, maximal heart rate, sex, and age. Physical activity explained an additional 4-7%. Further analyses showed the main contributing factors for absolute values of VO2PEAK were fat free mass, maximal heart rate, sex, and age. Physical activity explained an additional 3-6%.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
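For orientation, the EM baseline that these scoring and Gauss-Seidel variants are compared against is the familiar multiplicative MLEM update for Poisson data. A toy sketch with a random system matrix and no smoothing penalty; the paper's Fisher-scoring, Jacobi, and Gauss-Seidel algorithms are not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classic EM (MLEM) iterations for emission tomography, y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.sum(axis=0)                       # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                           # forward projection of current image
        ratio = y / np.maximum(proj, 1e-12)    # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy usage on a small random system.
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (60, 16))
x_true = rng.uniform(0, 5, 16)
y = rng.poisson(A @ x_true)
print(np.round(mlem(A, y), 1))
```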
Partial regularity of weak solutions to a PDE system with cubic nonlinearity
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Xu, Xiangsheng
2018-04-01
In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.
Zhou, Hua; Li, Lexin
2014-01-01
Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
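Spectral regularization penalizes the singular values of the matrix coefficient, and its proximal operator is singular-value soft-thresholding. A minimal sketch of that core step follows; the paper's full estimation algorithm and degrees-of-freedom formula are not reproduced, and the threshold value below is arbitrary.

```python
import numpy as np

def svt(B, tau):
    """Singular-value soft-thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s = np.maximum(s - tau, 0.0)               # shrink singular values toward zero
    return (U * s) @ Vt

# Usage: thresholding a noisy rank-1 matrix typically returns a (near) rank-1 estimate.
rng = np.random.default_rng(0)
B_true = np.outer(rng.standard_normal(8), rng.standard_normal(6))
B_noisy = B_true + 0.3 * rng.standard_normal((8, 6))
print(np.linalg.matrix_rank(svt(B_noisy, tau=2.0)))
```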
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into two following subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and a L2 total variation problem. We apply the conjugate gradient method and split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging
2016-12-15
[No abstract recovered: only fragments of the report's reference list survived extraction, citing work on HF vector sensors for radio astronomy (AIAA SPACE 2016; 2016 IEEE Aerospace Conference, doi: 10.1109/AERO.2016.7500688) and on expectation-maximization algorithms for structured covariance estimation in radio-astronomical imaging.]
Dissipative structure and global existence in critical space for Timoshenko system of memory type
NASA Astrophysics Data System (ADS)
Mori, Naofumi
2018-08-01
In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. First, we consider the linearized system: applying the energy method in the Fourier space, we derive a pointwise estimate of the solution in the Fourier space, which gives, for the first time, the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system using spectral analysis, which confirms that our pointwise estimate is optimal. Second, we consider the nonlinear system: we show that global-in-time existence and uniqueness can be proved under a minimal regularity assumption, in the critical Sobolev space H2. In the proof we do not need any time-weighted norm, unlike recent works; we use just an energy method, improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.
Potential estimates for the p-Laplace system with data in divergence form
NASA Astrophysics Data System (ADS)
Cianchi, A.; Schwarzacher, S.
2018-07-01
A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Ge, Xinxin
2017-10-01
We consider the non-Newtonian fluid equation of incompressible porous media. Using properties of operator semigroups and measure spaces together with a squeezing argument, Fourier analysis and a priori estimates in the measure space are employed to discuss the well-posedness of the solution of the equation, its asymptotic behavior, and its topological properties. Through a diffusion regularization method and a compactness argument, we study the overall decay rate of the solution of the equation in a certain space for suitable initial data. A decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using a double regularization model and the Duhamel principle.
Returns to Education: New Evidence for India, 1983-1999
ERIC Educational Resources Information Center
Dutta, Puja Vasudeva
2006-01-01
This paper estimates the returns to education for adult male workers in regular and casual wage employment using Indian national survey data at three points in time spanning almost two decades. Both standard and augmented Mincerian wage equations are estimated using a set of human capital measures and other controls after addressing the issue of…
ERIC Educational Resources Information Center
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
Development of a Frequency-based Measure of Syntactic Difficulty for Estimating Readability.
ERIC Educational Resources Information Center
Selden, Ramsay
Readability estimates are usually based on measures of word difficulty and measures of sentence difficulty. Word difficulty is measured in two ways: by the structural size and complexity of words or by reference to phenomena of language use, such as word-list frequency or the regularity of spelling patterns. Sentence difficulty is measured only in…
Der, Geoff; Roberts, Chris; Haw, Sally
2016-01-01
Introduction: Smoke-free legislation has been a great success for tobacco control but its impact on smoking uptake remains under-explored. We investigated if trends in smoking uptake amongst adolescents differed before and after the introduction of smoke-free legislation in the United Kingdom. Methods: Prevalence estimates for regular smoking were obtained from representative school-based surveys for the four countries of the United Kingdom. Post-intervention status was represented using a dummy variable and to allow for a change in trend, the number of years since implementation was included. To estimate the association between smoke-free legislation and adolescent smoking, the percentage of regular smokers was modeled using linear regression adjusted for trends over time and country. All models were stratified by age (13 and 15 years) and sex. Results: For 15-year-old girls, the implementation of smoke-free legislation in the United Kingdom was associated with a 4.3% reduction in the prevalence of regular smoking (P = .029). In addition, regular smoking fell by an additional 1.5% per annum post-legislation in this group (P = .005). Among 13-year-old girls, there was a reduction of 2.8% in regular smoking (P = .051), with no evidence of a change in trend post-legislation. Smaller and nonsignificant reductions in regular smoking were observed for 15- and 13-year-old boys (P = .175 and P = .113, respectively). Conclusions: Smoke-free legislation may help reduce smoking uptake amongst teenagers, with stronger evidence for an association seen in females. Further research that analyses longitudinal data across more countries is required. Implications: Previous research has established that smoke-free legislation has led to many improvements in population health, including reductions in heart attack, stroke, and asthma. However, the impacts of smoke-free legislation on the rates of smoking amongst children have been less investigated. Analysis of repeated cross-sectional surveys across the four countries of the United Kingdom shows smoke-free legislation may be associated with a reduction in regular smoking among school-aged children. If this association is causal, comprehensive smoke-free legislation could help prevent future generations from taking up smoking. PMID:26911840
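The model described is an interrupted-time-series style regression: a linear trend over time plus a post-legislation level shift and a post-legislation change in slope, adjusted for country. A sketch on invented data follows; the country labels, ban years, and all coefficients are placeholders, not the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year prevalence data with a post-legislation level shift
# ("post") and a post-legislation change in trend ("years_since").
rng = np.random.default_rng(0)
rows = []
for country, ban_year in {"ENG": 2007, "SCO": 2006, "WAL": 2007, "NIR": 2007}.items():
    for year in range(1998, 2014):
        post = int(year >= ban_year)
        years_since = max(year - ban_year, 0)
        prev = (22 - 0.6 * (year - 1998) - 4.0 * post - 1.5 * years_since
                + rng.normal(0, 1.0))
        rows.append({"country": country, "year": year, "post": post,
                     "years_since": years_since, "prev": prev})
df = pd.DataFrame(rows)

model = smf.ols("prev ~ year + post + years_since + C(country)", data=df).fit()
print(model.params[["post", "years_since"]].round(2))
```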
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi
ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized one, expressed as the Kullback-Leibler (KL) divergence. We first present the analysis that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on average, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the quantized posterior distribution and discuss that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use by power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance compared with typical designs and with novel design techniques previously published.
Schuchardt, Christiane; Kulkarni, Harshad R.; Shahinfar, Mostafa; Singh, Aviral; Glatting, Gerhard; Baum, Richard P.; Beer, Ambros J.
2016-01-01
In molecular radiotherapy with 177Lu-labeled prostate specific membrane antigen (PSMA) peptides, kidney and/or salivary glands doses limit the activity which can be administered. The aim of this work was to investigate the effect of the ligand amount and injected activity on the tumor-to-normal tissue biologically effective dose (BED) ratio for 177Lu-labeled PSMA peptides. For this retrospective study, a recently developed physiologically based pharmacokinetic model was adapted for PSMA targeting peptides. General physiological parameters were taken from the literature. Individual parameters were fitted to planar gamma camera measurements (177Lu-PSMA I&T) of five patients with metastasizing prostate cancer. Based on the estimated parameters, the pharmacokinetics of tumor, salivary glands, kidneys, total body and red marrow was simulated and time-integrated activity coefficients were calculated for different peptide amounts. Based on these simulations, the absorbed doses and BEDs for normal tissue and tumor were calculated for all activities leading to a maximal tolerable kidney BED of 10 Gy2.5/cycle, a maximal salivary gland absorbed dose of 7.5 Gy/cycle and a maximal red marrow BED of 0.25 Gy15/cycle. The fits yielded coefficients of determination > 0.85, acceptable relative standard errors and low parameter correlations. All estimated parameters were in a physiologically reasonable range. The amounts (for 25−29 nmol) and pertaining activities leading to a maximal tumor dose, considering the defined maximal tolerable doses to organs of risk, were calculated to be 272±253 nmol (452±420 μg) and 7.3±5.1 GBq. Using the actually injected amount (235±155 μg) and the same maximal tolerable doses, the potential improvement for the tumor BED was 1–3 fold. The results suggest that currently given amounts for therapy are in the appropriate order of magnitude for many lesions. However, for lesions with high binding site density or lower perfusion, optimizing the peptide amount and activity might improve the tumor-to-kidney and tumor-to-salivary glands BED ratio considerably. PMID:27611841
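The BED limits quoted here (Gy2.5 for kidney, Gy15 for red marrow) refer to the linear-quadratic biologically effective dose with the indicated alpha/beta ratio. For orientation, a minimal sketch of the textbook formula is given below; the paper's dosimetric model additionally handles the protracted dose delivery of 177Lu, which is not shown, and the example dose values are invented.

```python
def bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Linear-quadratic biologically effective dose: BED = D * (1 + d / (alpha/beta))."""
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# Usage: a kidney absorbed dose of 8 Gy delivered in one cycle (d = D), alpha/beta = 2.5 Gy.
print(bed(8.0, 8.0, 2.5))   # 33.6 Gy2.5
```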
Kletting, Peter; Schuchardt, Christiane; Kulkarni, Harshad R; Shahinfar, Mostafa; Singh, Aviral; Glatting, Gerhard; Baum, Richard P; Beer, Ambros J
2016-01-01
In molecular radiotherapy with 177Lu-labeled prostate specific membrane antigen (PSMA) peptides, kidney and/or salivary glands doses limit the activity which can be administered. The aim of this work was to investigate the effect of the ligand amount and injected activity on the tumor-to-normal tissue biologically effective dose (BED) ratio for 177Lu-labeled PSMA peptides. For this retrospective study, a recently developed physiologically based pharmacokinetic model was adapted for PSMA targeting peptides. General physiological parameters were taken from the literature. Individual parameters were fitted to planar gamma camera measurements (177Lu-PSMA I&T) of five patients with metastasizing prostate cancer. Based on the estimated parameters, the pharmacokinetics of tumor, salivary glands, kidneys, total body and red marrow was simulated and time-integrated activity coefficients were calculated for different peptide amounts. Based on these simulations, the absorbed doses and BEDs for normal tissue and tumor were calculated for all activities leading to a maximal tolerable kidney BED of 10 Gy2.5/cycle, a maximal salivary gland absorbed dose of 7.5 Gy/cycle and a maximal red marrow BED of 0.25 Gy15/cycle. The fits yielded coefficients of determination > 0.85, acceptable relative standard errors and low parameter correlations. All estimated parameters were in a physiologically reasonable range. The amounts (for 25-29 nmol) and pertaining activities leading to a maximal tumor dose, considering the defined maximal tolerable doses to organs of risk, were calculated to be 272±253 nmol (452±420 μg) and 7.3±5.1 GBq. Using the actually injected amount (235±155 μg) and the same maximal tolerable doses, the potential improvement for the tumor BED was 1-3 fold. The results suggest that currently given amounts for therapy are in the appropriate order of magnitude for many lesions. However, for lesions with high binding site density or lower perfusion, optimizing the peptide amount and activity might improve the tumor-to-kidney and tumor-to-salivary glands BED ratio considerably.
Kubik, Martha Y; Davey, Cynthia; MacLehose, Richard F; Coombes, Brandon; Nanney, Marilyn S
2015-01-01
In US secondary schools, vending machines and school stores are a common source of low-nutrient, energy-dense snacks and beverages, including sugar-sweetened beverages, high-fat salty snacks, and candy. However, little is known about the prevalence of these food practices in alternative schools, which are educational settings for students at risk of academic failure due to truancy, school expulsion, and behavior problems. Nationwide, more than 5,000 alternative schools enroll about one-half million students who are disproportionately minority and low-income youth. Principal survey data from a cross-sectional sample of alternative (n=104) and regular (n=339) schools collected biennially from 2002-2008 as part of the Centers for Disease Control and Prevention Minnesota School Health Profiles were used to assess and compare food practice prevalence over time. Generalized estimating equation models were used to estimate prevalence, adjusting for school demographics. Over time, food practice prevalence decreased significantly for both alternative and regular schools, although declines were mostly modest. However, the decrease in high-fat, salty snacks was significantly less for alternative than regular schools (-22.9% vs -42.2%; P<0.0001). Efforts to improve access to healthy food choices at school should reach all schools, including alternative schools. Study findings suggest high-fat salty snacks are more common in vending machines and school stores in alternative schools than regular schools, which may contribute to increased snacking behavior among students and extra consumption of salt, fat, and sugar. Study findings support the need to include alternative schools in future efforts that aim to reform the school food environment. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Estimating cross-price elasticity of e-cigarettes using a simulated demand procedure.
Grace, Randolph C; Kivell, Bronwyn M; Laugesen, Murray
2015-05-01
Our goal was to measure the cross-price elasticity of electronic cigarettes (e-cigarettes) and simulated demand for tobacco cigarettes both in the presence and absence of e-cigarette availability. A sample of New Zealand smokers (N = 210) completed a Cigarette Purchase Task to indicate their demand for tobacco at a range of prices. They sampled an e-cigarette and rated it and their own-brand tobacco for favorability, and indicated how many e-cigarettes and regular cigarettes they would purchase at 0.5×, 1×, and 2× the current market price for regular cigarettes, assuming that the price of e-cigarettes remained constant. Cross-price elasticity for e-cigarettes was estimated as 0.16, and was significantly positive, indicating that e-cigarettes were partially substitutable for regular cigarettes. Simulated demand for regular cigarettes at current market prices decreased by 42.8% when e-cigarettes were available, and e-cigarettes were rated 81% as favorably as own-brand tobacco. However when cigarettes cost 2× the current market price, significantly more smokers said they would quit (50.2%) if e-cigarettes were not available than if they were available (30.0%). Results show that e-cigarettes are potentially substitutable for regular cigarettes and their availability will reduce tobacco consumption. However, e-cigarettes may discourage smokers from quitting entirely as cigarette price increases, so policy makers should consider maintaining a constant relative price differential between e-cigarettes and tobacco cigarettes. © The Author 2014. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
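Cross-price elasticity is the percentage change in e-cigarette demand per percentage change in cigarette price, which can be read off as the slope of a log-log fit. A minimal sketch with invented demand figures; how the study actually computed its estimate from the purchase-task responses is not reproduced here.

```python
import numpy as np

def cross_price_elasticity(p_cig, q_ecig):
    """Log-log estimate of cross-price elasticity: slope of ln(Q_ecig) on ln(P_cig)."""
    slope, _ = np.polyfit(np.log(p_cig), np.log(q_ecig), 1)
    return slope

# E-cigarette purchases rise slightly as the cigarette price rises (made-up numbers).
prices = np.array([0.5, 1.0, 2.0])          # multiples of the current market price
ecig_demand = np.array([9.0, 10.0, 11.2])   # mean e-cigarettes purchased per week
print(round(cross_price_elasticity(prices, ecig_demand), 2))
```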
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14 mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
Davey, Cynthia; MacLehose, Richard F.; Coombes, Brandon; Nanney, Marilyn S.
2014-01-01
In US secondary schools, vending machines and school stores are a common source of low-nutrient, energy-dense snacks and beverages, including sugar-sweetened beverages, high fat salty snacks and candy. However, little is known about the prevalence of these food practices in alternative schools, educational settings for students at risk of academic failure due to truancy, school expulsion and behavioral problems. Nationwide, over 5000 alternative schools enroll about one-half million students, who are disproportionately minority and low-income youth. Principal survey data from a cross-sectional sample of alternative (n=104) and regular (n=339) schools collected biennially from 2002–2008 as part of the Centers for Disease Control and Prevention Minnesota School Health Profiles were used to assess and compare food practice prevalence over time. Generalized estimating equation models were used to estimate prevalence, adjusting for school demographics. Over time, food practice prevalence decreased significantly for both alternative and regular schools, although declines were mostly modest. However, the decrease in high fat, salty snacks was significantly less for alternative than regular schools (−22.9% versus −42.2%; p<0.0001). Efforts to improve access to healthy food choice at school should reach all schools, including alternative schools. Study findings suggest high fat salty snacks are more common in vending machines and school stores in alternative schools than regular schools, which may contribute to increased snacking behavior among students and extra consumption of salt, fat and sugar. Study findings support the need to include alternative schools in future efforts that aim to reform the school food environment. PMID:25132120
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
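One way to read the epsilon-constrained model mentioned at the end is: maximize the Fisher information subject to a cap on the state deformation. A toy sketch of that scalarization follows; both objective functions below are invented stand-ins, not the paper's quantum-dynamical quantities.

```python
import numpy as np
from scipy.optimize import minimize

def fisher_info(u):
    """Stand-in objective: information gain that saturates with control strength."""
    return np.sum(u ** 2) / (1.0 + np.sum(u ** 2))

def deformation(u):
    """Stand-in constraint: state deformation grows with control strength."""
    return 0.5 * np.sum(u ** 2)

eps = 0.3
res = minimize(lambda u: -fisher_info(u),            # maximize F  ->  minimize -F
               x0=np.array([0.1, 0.1]),
               constraints=[{"type": "ineq", "fun": lambda u: eps - deformation(u)}])
print(np.round(res.x, 3), round(fisher_info(res.x), 3), round(deformation(res.x), 3))
```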
Analysis of ground-water data for selected wells near Holloman Air Force Base, New Mexico, 1950-95
Huff, G.F.
1996-01-01
Ground-water-level, ground-water-withdrawal, and ground-water-quality data were evaluated for trends. Holloman Air Force Base is located in the west-central part of Otero County, New Mexico. Ground-water-data analyses include assembly and inspection of U.S. Geological Survey and Holloman Air Force Base data, including ground-water-level data for public-supply and observation wells and withdrawal and water-quality data for public-supply wells in the area. Well Douglas 4 shows a statistically significant decreasing trend in water levels for 1972-86 and a statistically significant increasing trend in water levels for 1986-90. Water levels in wells San Andres 5 and San Andres 6 show statistically significant decreasing trends for 1972-93 and 1981-89, respectively. A mixture of statistically significant increasing trends, statistically significant decreasing trends, and lack of statistically significant trends over periods ranging from the early 1970's to the early 1990's is indicated for the Boles wells and wells near the Boles wells. Well Boles 5 shows a statistically significant increasing trend in water levels for 1981-90. Well Boles 5 and well 17S.09E.25.343 show no statistically significant trends in water levels for 1990-93 and 1988-93, respectively. For 1986-93, well Frenchy 1 shows a statistically significant decreasing trend in water levels. Ground-water withdrawal from the San Andres and Douglas wells regularly exceeded estimated ground-water recharge from San Andres Canyon for 1963-87. For 1951-57 and 1960-86, ground-water withdrawal from the Boles wells regularly exceeded total estimated ground-water recharge from Mule, Arrow, and Lead Canyons. Ground-water withdrawal from the San Andres and Douglas wells and from the Boles wells nearly equaled estimated ground-water recharge for 1989-93 and 1986-93, respectively. For 1987-93, ground-water withdrawal from the Escondido well regularly exceeded estimated ground-water recharge from Escondido Canyon, and ground-water withdrawal from the Frenchy wells regularly exceeded total estimated ground-water recharge from Dog and Deadman Canyons. Water-quality samples were collected from selected Douglas, San Andres, and Boles public-supply wells from December 1994 to February 1995. Concentrations of dissolved nitrate show the most consistent increases between current and historical data. Current concentrations of dissolved nitrate are greater than historical concentrations in 7 of 10 wells.
Maximizing mitigation benefits: research to support a mitigation cost framework-final report.
DOT National Transportation Integrated Search
2016-08-01
Tracking environmental costs in the project development process has been a challenging task for state departments of transportation (DOTs). Previous research identified the need to accurately track and subsequently estimate project costs resultin...
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-01-01
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice, though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops that maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even the demands projected for 2050. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436
Assessment of physiological demand in kitesurfing.
Vercruyssen, F; Blin, N; L'huillier, D; Brisswalter, J
2009-01-01
To evaluate the physiological demands of kitesurfing, ten elite subjects performed an incremental running test on a 400-m track and a 30-min on-water crossing trial in a light crosswind (LW, 12-15 knots). Oxygen uptake (VO2) was estimated from the heart rate (HR) recorded during the crossing trial using the individual HR-VO2 relationship determined during the incremental test. Blood lactate concentration [Lab] was measured at rest and 3 min after exercise completion. Mean HR and estimated VO2 values represented, respectively, 80.6 ± 7.5% of maximal heart rate and 69.8 ± 11.7% of maximal oxygen uptake for board speeds ranging from 15 to 17 knots. Low values of [Lab] were observed at the end of the crossing trial (2.1 ± 1.2 mmol l-1). This first analysis of kitesurfing suggests that the energy demand is mainly sustained by aerobic metabolism during a LW condition.
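The HR-based estimation described above amounts to fitting an individual linear HR-VO2 relationship during the incremental test and applying it to the field recording; a minimal sketch with hypothetical numbers (not the study's data):

```python
# Sketch of the individual HR-VO2 calibration implied by the abstract: fit a
# linear relationship during the incremental test, then map field HR to
# estimated VO2. All numbers are hypothetical.
import numpy as np

# Incremental test: heart rate (bpm) and measured VO2 (ml/kg/min)
hr_test  = np.array([110, 125, 140, 155, 170, 185])
vo2_test = np.array([18.0, 24.5, 31.0, 37.0, 44.0, 51.0])

slope, intercept = np.polyfit(hr_test, vo2_test, 1)   # individual HR-VO2 line

hr_field = np.array([148, 152, 160, 157])             # HR recorded on the water
vo2_estimated = slope * hr_field + intercept
print(vo2_estimated.round(1))
```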
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520
2010-03-15
The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
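A generic particle swarm optimization loop on a standard multimodal test function gives a feel for the approach; this is a sketch of the general method, not the gravitational-wave detection statistic or the authors' tuning.

```python
# Generic particle swarm optimization on a multimodal test function (Rastrigin),
# illustrating the stochastic search strategy described above.
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(0)
n_particles, dim, iters = 40, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration constants

pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), rastrigin(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = rastrigin(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```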
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
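As one concrete illustration of minimizing expected prediction error through cross-validated regularization (one of the three algorithm families mentioned), the sketch below builds a sparse criterion-keyed predictor from a toy item pool with scikit-learn; all data and settings are hypothetical.

```python
# Sketch of cross-validated regularization for criterion-keyed scale
# construction: the regularization strength is chosen to minimize
# cross-validated prediction error, not within-sample fit. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(1)
n_people, n_items = 500, 300
items = rng.integers(1, 6, size=(n_people, n_items)).astype(float)  # Likert items
died = rng.integers(0, 2, size=n_people)                            # outcome (toy)

model = LogisticRegressionCV(
    Cs=20, cv=10, penalty="l1", solver="saga", scoring="neg_log_loss", max_iter=5000
)
model.fit(items, died)

selected = np.flatnonzero(model.coef_.ravel())   # items retained in the scale
print(f"{selected.size} items selected; best C = {model.C_[0]:.3g}")
```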
Estimation of teleported and gained parameters in a non-inertial frame
NASA Astrophysics Data System (ADS)
Metwally, N.
2017-04-01
Quantum Fisher information is introduced as a measure for estimating the teleported information between two users, one of whom is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the parameters gained during the teleportation process. The degree to which these parameters can be estimated depends on the value of the acceleration, whether the single-mode approximation is used (within/beyond), the type of information (classical/quantum) encoded in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.
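For reference, the standard expression of the quantum Fisher information and the associated quantum Cramér-Rao bound (general background; the abstract does not spell out its working formulas):

```latex
% Standard definition of the quantum Fisher information for a state with
% spectral decomposition \rho(\theta) = \sum_i \lambda_i |i\rangle\langle i|,
% and the quantum Cramer-Rao bound for M independent repetitions.
F_Q(\theta) \;=\; 2 \sum_{\lambda_i + \lambda_j > 0}
    \frac{\bigl|\langle i|\,\partial_\theta \rho\,|j\rangle\bigr|^{2}}{\lambda_i + \lambda_j},
\qquad
\operatorname{Var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{M\,F_Q(\theta)} .
```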
Cawley, Gavin C; Talbot, Nicola L C
2006-10-01
Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/
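A sketch of the marginalization step described above, assuming a Laplace prior over the N model weights and a Jeffreys hyperprior on the regularization parameter λ (the exact constants used in the paper may differ):

```latex
% Laplace prior over the N weights with hyperparameter \lambda, integrated
% against a Jeffreys hyperprior p(\lambda) \propto 1/\lambda (sketch only).
p(\mathbf{w}\mid\lambda) = \Bigl(\tfrac{\lambda}{2}\Bigr)^{N}
    \exp\Bigl(-\lambda \sum_{i=1}^{N} |w_i|\Bigr),
\qquad
p(\mathbf{w}) \;\propto\; \int_{0}^{\infty} \lambda^{N-1}
    \exp\Bigl(-\lambda \sum_{i=1}^{N} |w_i|\Bigr)\, d\lambda
  \;=\; \Gamma(N)\,\Bigl(\sum_{i=1}^{N} |w_i|\Bigr)^{-N} .
```

Under these assumptions the effective regularization term becomes proportional to N log Σ_i |w_i|, leaving no regularization parameter to tune by cross-validation.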
Kochanowicz, Andrzej; Kochanowicz, Kazimierz; Mieszkowski, Jan; Aschenbrenner, Piotr; Bielec, Grzegorz; Szark-Eckardt, Mirosława
2016-01-01
The aim of the study was to define the relationship between the maximal power of the lower limbs, the biomechanics of the forward handspring vault, and the score received during a gymnastics competition. The research involved 42 gymnasts aged 9-11 years competing in Poland's Junior Championships. The study consisted of three stages: first, estimating the level of indicators of maximal power of the lower limbs tested on a force plate during the countermovement jump; second, estimating the level of biomechanical indicators of the front handspring vault. For both groups of indicators and the score received by gymnasts during the vault, linear correlation analyses were performed. The last stage consisted of a multiple regression analysis to predict the performance level of the front handspring vault. Results showed a positive correlation (0.401, p < 0.05) of lower limbs' maximal power (1400 ± 502 W) with the judges' score for the front handspring vault (13.38 ± 1.02 points). However, the highest significant (p < 0.001) correlations with the judges' score were found for the angle of the hip joint in the second phase of the flight (196.00 ± 16.64°) and the contact time of the hands with the vault surface (0.264 ± 0.118 s), with correlation coefficients of -0.671 and -0.634, respectively. In conclusion, the angles of the hip joint in the second phase of the flight and when the hands touched the vault surface proved to be the most important indicators for the received score. PMID:28149408
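The final stage corresponds to an ordinary multiple linear regression of the judges' score on the biomechanical indicators; a minimal sketch with simulated, hypothetical data (not the study's measurements, and the generating relationship below is fabricated purely for illustration):

```python
# Sketch of a multiple-regression step: predicting the judges' score from
# biomechanical indicators. Data and the underlying relationship are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 42
power   = rng.normal(1400, 500, n)    # lower-limb maximal power (W)
hip_deg = rng.normal(196, 17, n)      # hip-joint angle, second flight phase (deg)
contact = rng.normal(0.26, 0.12, n)   # hand contact time with the vault (s)
X = np.column_stack([power, hip_deg, contact])

# Fabricated score model: higher power helps, larger hip angle and longer
# contact time hurt, plus noise (illustrative only).
score = 13.4 + 0.0008 * (power - 1400) - 0.04 * (hip_deg - 196) \
        - 3.0 * (contact - 0.26) + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, score)
print("R^2 =", round(model.score(X, score), 3))
print("coefficients:", model.coef_.round(4))
```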
Human skeletal muscle mitochondrial capacity.
Rasmussen, U F; Rasmussen, H N
2000-04-01
Under aerobic work, oxygen consumption and the major share of ATP production occur in the mitochondria; it is therefore a relevant question whether the in vivo rates can be accounted for by mitochondrial capacities measured in vitro. Mitochondria were isolated from human quadriceps muscle biopsies in yields of approximately 45%. The tissue content of total creatine, mitochondrial protein and different cytochromes was estimated. A number of activities were measured in functional assays of the mitochondria: pyruvate, ketoglutarate, glutamate and succinate dehydrogenases, palmitoyl-carnitine respiration, cytochrome oxidase, the respiratory chain and the ATP synthesis. The activities involved in carbohydrate oxidation could account for in vivo oxygen uptakes of 15-16 mmol O2 min-1 kg-1, or slightly above the value measured at maximal work rates in the knee-extensor model of Saltin and co-workers, i.e. without limitation from the cardiac output. This probably indicates that the maximal oxygen consumption of the muscle is limited by the mitochondrial capacities. The in vitro activities of fatty acid oxidation corresponded to only 39% of those of carbohydrate oxidation. The maximal rate of free energy production from aerobic metabolism of glycogen was calculated from the mitochondrial activities and estimates of the ΔG of ATP hydrolysis and the efficiency of the actin-myosin reaction. The resultant value was 20 W kg-1, or approximately 70% of the maximal in vivo work rates, of which 10-20% probably are sustained by the anaerobic ATP production. The lack of aerobic in vitro ATP synthesis might reflect termination of some critical interplay between cytoplasm and mitochondria.
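A back-of-envelope reproduction of how an aerobic power near 20 W kg-1 can follow from an oxygen uptake of roughly 15.5 mmol O2 min-1 kg-1; the P/O ratio, in vivo ΔG of ATP hydrolysis, and actin-myosin efficiency used here are illustrative assumptions, not the authors' exact figures.

```python
# Illustrative arithmetic only; the assumed conversion factors below are not
# taken from the paper.
vo2 = 15.5e-3 / 60          # mol O2 per second per kg muscle (15.5 mmol/min/kg)
atp_per_o2 = 5.0            # assumed ~2.5 ATP per O atom, i.e. ~5 per O2
delta_g_atp = 55e3          # assumed in vivo free energy of ATP hydrolysis, J/mol
efficiency = 0.30           # assumed efficiency of the actin-myosin reaction

mechanical_power = vo2 * atp_per_o2 * delta_g_atp * efficiency
print(f"{mechanical_power:.1f} W/kg")   # ~21 W/kg, close to the reported 20 W/kg
```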
Estimating Likelihood of Fetal In Vivo Interactions Using In ...
Tox21/ToxCast efforts provide in vitro concentration-response data for thousands of compounds. Predicting whether chemical-biological interactions observed in vitro will occur in vivo is challenging. We hypothesize that using a modified model from the FDA guidance for drug interaction studies, Cmax/AC50 (i.e., maximal in vivo blood concentration over the half-maximal in vitro activity concentration), will give a useful approximation for concentrations where in vivo interactions are likely. Further, for doses where maternal blood concentrations are likely to elicit an interaction (Cmax/AC50 > 0.1), where do the compounds accumulate in fetal tissues? In order to estimate these doses based on Tox21 data, in silico parameters of the chemical fraction unbound in plasma and intrinsic hepatic clearance were estimated with ADMET Predictor (Simulations Plus, Inc.) and used in the HTTK R package to obtain Cmax values from a physiologically based toxicokinetics model. In silico estimated Cmax values predicted in vivo human Cmax with a median absolute error of 0.81 for 93 chemicals, giving confidence in the R package and in silico estimates. A case example evaluating Cmax/AC50 values for the peroxisome proliferator-activated receptor gamma (PPARγ) and the glucocorticoid receptor revealed known compounds (glitazones and corticosteroids, respectively) highest on the list at pharmacological doses. Doses required to elicit likely interactions across all Tox21/ToxCast assays were compared to
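The screening rule itself reduces to a simple ratio test; a sketch with hypothetical concentrations (the actual Cmax values come from the HTTK toxicokinetics model, which is not reproduced here):

```python
# Minimal sketch of the Cmax/AC50 screen described above: flag chemicals whose
# estimated maximal blood concentration exceeds one tenth of the in vitro
# half-maximal activity concentration. Values are hypothetical.
chemicals = {
    # name: (Cmax in uM, AC50 in uM) -- hypothetical example values
    "chemical_A": (1.2, 3.0),
    "chemical_B": (0.05, 20.0),
    "chemical_C": (4.0, 8.0),
}

for name, (cmax, ac50) in chemicals.items():
    ratio = cmax / ac50
    likely = "likely" if ratio > 0.1 else "unlikely"
    print(f"{name}: Cmax/AC50 = {ratio:.3f} -> in vivo interaction {likely}")
```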
Galias, Zbigniew
2017-05-01
An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
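A quick numerical way to see regular versus chaotic parameters of f(x) = ax(1-x) is the sign of the Lyapunov exponent; the sketch below is only an illustration and is unrelated to the rigorous interval-arithmetic bounds used in the paper.

```python
# Numerical illustration (not the authors' rigorous method): estimate the
# Lyapunov exponent of f(x) = a*x*(1-x); a negative value signals a periodic
# window (regular parameter), a positive value signals chaos.
import numpy as np

def lyapunov(a, x0=0.5, transient=1000, n=20000):
    x = x0
    for _ in range(transient):          # discard transient iterates
        x = a * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = a * x * (1 - x)
        total += np.log(abs(a * (1 - 2 * x)))
    return total / n

# 3.5 lies in the period-4 regime, 3.83 inside the period-3 window, 3.9 is chaotic
for a in (3.5, 3.83, 3.9):
    print(f"a = {a}: lambda = {lyapunov(a):+.4f}")
```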
SCoT: a Python toolbox for EEG source connectivity.
Billinger, Martin; Brunner, Clemens; Müller-Putz, Gernot R
2014-01-01
Analysis of brain connectivity has become an important research tool in neuroscience. Connectivity can be estimated between cortical sources reconstructed from the electroencephalogram (EEG). Such analysis often relies on trial averaging to obtain reliable results. However, some applications such as brain-computer interfaces (BCIs) require single-trial estimation methods. In this paper, we present SCoT, a source connectivity toolbox for Python. This toolbox implements routines for blind source decomposition and connectivity estimation with the MVARICA approach. Additionally, a novel extension called CSPVARICA is available for labeled data. SCoT estimates connectivity from various spectral measures relying on vector autoregressive (VAR) models. Optionally, these VAR models can be regularized to facilitate ill-posed applications such as single-trial fitting. We demonstrate basic usage of SCoT on motor imagery (MI) data. Furthermore, we show simulation results of utilizing SCoT for feature extraction in a BCI application. These results indicate that CSPVARICA and correct regularization can significantly improve MI classification. While SCoT was mainly designed for application in BCIs, it contains useful tools for other areas of neuroscience. SCoT is a software package that (1) brings combined source decomposition and connectivity estimation to the open Python platform, and (2) offers tools for single-trial connectivity estimation. The source code is released under the MIT license and is available online at github.com/SCoT-dev/SCoT.
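A minimal ridge-regularized VAR fit of the kind underlying such spectral connectivity measures is sketched below; this is a generic illustration, not the SCoT API (see the toolbox repository for its actual interface).

```python
# Minimal ridge-regularized VAR(p) fit, the model family underlying the spectral
# connectivity measures described above. Generic sketch, not the SCoT API.
import numpy as np

def fit_var_ridge(x, p=2, lam=1.0):
    """x: array (n_samples, n_channels); returns coefficients (p, n_ch, n_ch)."""
    n, m = x.shape
    # Lagged design matrix: row t holds [x[t-1], ..., x[t-p]]
    design = np.hstack([x[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    # Ridge (Tikhonov) solution handles the ill-posed single-trial case
    gram = design.T @ design + lam * np.eye(design.shape[1])
    coeffs = np.linalg.solve(gram, design.T @ target)
    return coeffs.T.reshape(m, p, m).transpose(1, 0, 2)

rng = np.random.default_rng(0)
trial = rng.standard_normal((600, 4))        # one trial, 4 reconstructed sources
A = fit_var_ridge(trial, p=2, lam=5.0)
print(A.shape)                               # (2, 4, 4): one coefficient matrix per lag
```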
Motion estimation under location uncertainty for turbulent fluid flows
NASA Astrophysics Data System (ADS)
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow that naturally incorporates a notion of uncertainty in the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
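For orientation, a classical (Horn-Schunck type) variational optical flow energy shows where a regularization parameter enters; the paper's stochastic transport data term and kinetic-energy-based regularizer differ from this generic form.

```latex
% Classical variational optical flow energy (Horn-Schunck form), shown only to
% indicate where the regularization weight \alpha enters; the stochastic
% transport data term proposed in the paper above replaces the first integrand.
E(\mathbf{w}) \;=\; \int_{\Omega}
    \bigl( \partial_t I + \mathbf{w}\cdot\nabla I \bigr)^{2} \, d\mathbf{x}
  \;+\; \alpha \int_{\Omega} \bigl( \|\nabla u\|^{2} + \|\nabla v\|^{2} \bigr) \, d\mathbf{x},
\qquad \mathbf{w} = (u, v)^{\top}.
```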