NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of the binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in the binary probit regression model under the MLE method and under Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method is higher than under Firth's approach for small sample sizes. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
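Firth's approach replaces the ordinary probit log-likelihood with a Jeffreys-prior-penalized version, l(β) + ½ log|I(β)|, which keeps the estimates finite under separation. A minimal sketch of that idea follows; the function name, the use of scipy, and the numerical safeguards are my own illustration, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def firth_probit_nll(beta, X, y):
    """Negative Firth-penalized probit log-likelihood:
    -( l(beta) + 0.5*log|I(beta)| ), with I = X'WX."""
    eta = X @ beta
    p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    w = norm.pdf(eta) ** 2 / (p * (1 - p))      # probit Fisher weights
    _, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
    return -(loglik + 0.5 * logdet)

# X: n-by-k design matrix (first column of ones), y: 0/1 response vector
# beta_firth = minimize(firth_probit_nll, np.zeros(X.shape[1]),
#                       args=(X, y), method="BFGS").x
```

Under separation, the plain MLE drives some coordinates of β toward infinity, while the log-determinant penalty above grows negative fast enough to keep the maximizer finite.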
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ, so the deviation of the estimated θ̂ from the true values may yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (termed maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization, and the choice of estimator may affect how well the Nakagami parameter detects changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meaning associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
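The abstract does not reproduce the estimator formulas. The sketch below uses the standard forms from the Nakagami/gamma literature, which I assume correspond to the paper's MBE, MLE1, and MLE2 (the Greenwood approximation is a piecewise rational fit and is omitted here):

```python
import numpy as np

def nakagami_m_estimates(r):
    """Estimate the Nakagami m parameter from an envelope sample r > 0."""
    y = r ** 2
    m_mbe = np.mean(y) ** 2 / np.var(y)       # moment-based estimator
    # MLE approximations solve ln(m) - psi(m) = Delta for m:
    delta = np.log(np.mean(y)) - np.mean(np.log(y))
    m_mle1 = 1.0 / (2.0 * delta)                                   # first order
    m_mle2 = (3.0 + np.sqrt(9.0 + 12.0 * delta)) / (12.0 * delta)  # second order
    return m_mbe, m_mle1, m_mle2
```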
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy, and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal conditions.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies the method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
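The abstract does not include the sampler itself. A generic random-walk Metropolis-Hastings kernel of the kind described might look like the following sketch, where log_post is a placeholder for the flow model's unnormalized log-posterior (not the paper's code):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=50_000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings over the model parameters."""
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain  # credible intervals: chain quantiles after burn-in
```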
SLDAssay: A software package and web tool for analyzing limiting dilution assays.
Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G
2017-11-01
Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness-of-fit p-values and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate that the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that the exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed.
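SLDAssay's exact machinery is beyond an abstract, but the underlying MLE rests on the standard single-hit Poisson model, in which a well receiving u cells is positive with probability 1 - exp(-τu). A minimal sketch of that point estimate (my construction, not the package's code):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sld_mle(u, n, x):
    """MLE of the target-cell frequency tau from an SLD assay.
    u: cells per well at each dilution; n: wells; x: positive wells."""
    u, n, x = map(np.asarray, (u, n, x))
    def nll(log_tau):
        p = 1 - np.exp(-np.exp(log_tau) * u)   # P(well positive)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))
    res = minimize_scalar(nll, bounds=(-25, 0), method="bounded")
    return np.exp(res.x)

# e.g. sld_mle(u=[1e6, 2e5, 4e4], n=[12, 12, 12], x=[10, 4, 1])
```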
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below the detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistics (rROS), and gamma regression on order statistics (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations aimed at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, the results show that for data with low skewness, the performance of the different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under the lognormal and Weibull distributions is questionable, particularly when the sample size is small or the censoring percentage is high. In such conditions, the MLE under the gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Regarding model misspecification, the MLE based on the lognormal and Weibull distributions provides poor estimates when the true distribution of the data is misspecified. However, rROS, GROS, and the MLE under the gamma distribution are generally robust to model misspecification regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using the MLE based on the gamma distribution, rROS, and GROS.
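For concreteness, the parametric MLE for left-censored data maximizes a likelihood in which detected values contribute the density and nondetects contribute the CDF at their detection limit. A minimal lognormal sketch (illustrative; rROS, GROS, and KM follow different constructions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(detects, dl_nondetects):
    """MLE for lognormal data with left-censoring: detects contribute
    the density, nondetects the CDF at their detection limit."""
    logx = np.log(np.asarray(detects, dtype=float))
    logdl = np.log(np.asarray(dl_nondetects, dtype=float))
    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = norm.logpdf(logx, mu, sigma).sum() - logx.sum()  # log-scale Jacobian
        ll += norm.logcdf((logdl - mu) / sigma).sum()         # censored terms
        return -ll
    res = minimize(nll, x0=[logx.mean(), np.log(logx.std() + 1e-6)])
    mu, sigma = res.x[0], np.exp(res.x[1])
    return mu, sigma   # mean concentration: exp(mu + sigma**2 / 2)
```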
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find that this method also has potential for GNSS anti-spoofing, since a spoofing attack that misleads the positioning and timing result will distort the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings from real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can greatly improve the availability of GNSS service under spoofing attacks.
Hyodo, T; Minagawa, K; Inoue, T; Fujimoto, J; Minami, N; Bito, R; Mikita, A
2013-12-01
A nicotine part-filter method can be applied to estimate smokers' mouth level exposure (MLE) to smoke constituents. The objectives of this study were (1) to generate calibration curves for 47 smoke constituents, (2) to estimate MLE to selected smoke constituents for Japanese smokers of commercially available cigarettes covering a wide range of International Organization for Standardization tar yields (1-21 mg/cigarette), and (3) to investigate relationships between MLE estimates and various machine-smoking yields. Five cigarette brands were machine-smoked under 7 different smoking regimes, and smoke constituents and the nicotine content in part-filters were measured. Calibration curves were then generated. Spent cigarette filters were collected from a target of 50 smokers for each of the 15 brands, and a total of 780 filters were obtained. The nicotine content in part-filters was then measured and MLE to each smoke constituent was estimated. Strong correlations were identified between nicotine content in part-filters and 41 of the 47 smoke constituent yields. Estimates of MLE to acetaldehyde, acrolein, 1,3-butadiene, benzene, benzo[a]pyrene, carbon monoxide, and tar showed significant negative correlations with the corresponding constituent yields per mg nicotine under the Health Canada Intense smoking regime, whereas significant positive correlations were observed for N-nitrosonornicotine and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone.
An 'unconditional-like' structure for the conditional estimator of odds ratio from 2 × 2 tables.
Hanley, James A; Miettinen, Olli S
2006-02-01
In the estimation of the odds ratio (OR), the conditional maximum-likelihood estimate (cMLE) is preferred to the more readily computed unconditional one (uMLE). However, the exact cMLE does not have a closed form to help divine it from the uMLE or to understand in what circumstances the difference between the two is appreciable. Here, the cMLE is shown to have the same 'ratio of cross-products' structure as its unconditional counterpart, but with two of the cell frequencies augmented, so as to shrink the unconditional estimator towards unity. The augmentation involves a factor, similar to the finite population correction, derived from the minimum of the marginal totals.
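Although the paper's augmented cross-product form is not reproduced in this abstract, the exact cMLE it approximates is the maximizer of Fisher's noncentral hypergeometric likelihood of one cell given all four margins. A numerical sketch using SciPy's nchypergeom_fisher distribution (my illustration, requiring SciPy 1.6 or later):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nchypergeom_fisher

def conditional_mle_or(a, b, c, d):
    """Conditional MLE of the odds ratio for the 2 x 2 table
    [[a, b], [c, d]]: maximizes Fisher's noncentral hypergeometric
    likelihood of cell a given all four margins."""
    M, n, N = a + b + c + d, a + c, a + b   # total, column-1 total, row-1 total
    nll = lambda log_or: -nchypergeom_fisher.logpmf(a, M, n, N, np.exp(log_or))
    res = minimize_scalar(nll, bounds=(-10, 10), method="bounded")
    return np.exp(res.x)

# Unconditional MLE for comparison: (a * d) / (b * c)
```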
PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obreschkow, D.; Meyer, M.
2013-11-10
Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements, that is, assuming a random sin i distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to treat the inclination measurements as exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
Fractal analysis of the short time series in a visibility graph method
NASA Astrophysics Data System (ADS)
Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan
2016-05-01
The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to assess the quality of the estimated power-law index, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that the MLE gives an optimal estimate of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE, and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. The corresponding results show that the VG can provide a reliable estimate of Hurst exponents.
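The MLE-plus-KS procedure described is in the spirit of the Clauset power-law fitting recipe. A sketch under the continuous approximation (my construction; the paper may use the discrete MLE and a different k_min selection rule):

```python
import numpy as np

def fit_power_law_tail(degrees):
    """Continuous-approximation MLE of a power-law index with a KS scan
    over k_min; returns (ks_distance, alpha, kmin) triples."""
    k = np.sort(np.asarray(degrees, dtype=float))
    out = []
    for kmin in np.unique(k)[:-1]:
        tail = k[k >= kmin]
        s = np.sum(np.log(tail / kmin))
        if s <= 0:
            continue
        alpha = 1.0 + len(tail) / s                  # MLE of the index
        cdf_emp = np.arange(1, len(tail) + 1) / len(tail)
        cdf_fit = 1.0 - (tail / kmin) ** (1.0 - alpha)
        out.append((np.max(np.abs(cdf_emp - cdf_fit)), alpha, kmin))
    return out   # per the paper, take alpha at the first local KS minimum
```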
F-8C adaptive flight control extensions. [for maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of cancer patients after receiving treatment, with censored data, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Using a Gamma prior, the likelihood function produces a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by using the Linex approximation. After obtaining λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the Linex approximation to find the better method for this observation, via the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than the MLE.
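The abstract omits the closed forms. Under a rate-parameterized Gamma(α, β) prior on the exponential rate λ, the standard results (which I assume match the paper's setup) are the Gamma posterior and the Linex-loss Bayes estimator obtained from the posterior moment-generating function:

```latex
% Gamma(alpha, beta) prior (rate parameterization), exponential data t_1..t_n:
\lambda \mid t \sim \mathrm{Gamma}(A, B), \qquad
A = \alpha + n, \quad B = \beta + \textstyle\sum_{i} t_i .
% Linex loss L(d,\lambda) = e^{a(d-\lambda)} - a(d-\lambda) - 1 gives
\hat{\lambda}_{BL}
  = -\frac{1}{a}\,\ln E\!\left[e^{-a\lambda} \mid t\right]
  = \frac{A}{a}\,\ln\!\left(1 + \frac{a}{B}\right),
% using E[e^{-a\lambda} \mid t] = \left(B/(B+a)\right)^{A}.
```

A plug-in route then gives Ŝ_BL(t) = exp(-λ̂_BL t) and ĥ_BL = λ̂_BL, though the paper may instead apply the Linex rule to the hazard and survival functions directly.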
Accumulation of Major Life Events in Childhood and Adult Life and Risk of Type 2 Diabetes Mellitus
Masters Pedersen, Jolene; Hulvej Rod, Naja; Andersen, Ingelise; Lange, Theis; Poulsen, Gry; Prescott, Eva; Lund, Rikke
2015-01-01
Background: The aim of the study was to estimate the effect of the accumulation of major life events (MLE) in childhood and adulthood, in both the private and working domains, on risk of type 2 diabetes mellitus (T2DM). Furthermore, we aimed to test the possible interaction between childhood and adult MLE and to investigate modification of these associations by educational attainment. Methods: The study was based on 4,761 participants from the Copenhagen City Heart Study free of diabetes at baseline and followed for 10 years. MLE were categorized as 0, 1, 2, 3 or more events. Multivariate logistic regression models adjusted for age, sex, education and family history of diabetes were used to estimate the association between MLE and T2DM. Results: In childhood, experiencing 3 or more MLE was associated with a 69% higher risk of developing T2DM (odds ratio (OR) 1.69; 95% confidence interval (CI) 1.60, 3.27). The accumulation of MLE in adult private (p-trend = 0.016) and work life (p-trend = 0.049) was associated with risk of T2DM in a dose-response manner. There was no evidence that experiencing MLE in both childhood and adult life was more strongly associated with T2DM than experiencing events at only one time point. There was some evidence that being simultaneously exposed to childhood MLE and short education (OR 2.28; 95% CI 1.45, 3.59) and work MLE and short education (OR 2.86; 95% CI 1.62, 5.03) was associated with higher risk of T2DM, as the joint effects were greater than the sum of their individual effects. Conclusions: Findings from this study suggest that the accumulation of MLE in childhood, private adult life and work life, respectively, are risk factors for developing T2DM. PMID:26394040
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least-squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson-distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least-squares minimization, for use with the MLE for Poisson-distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event-counting histograms using the maximum likelihood estimator (MLE) for Poisson-distributed data, rather than the nonlinear least-squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least-squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least-squares methods may be applied to event-counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least-squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases for exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not come into general use, primarily because, in contrast to nonlinear least-squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event-counting histograms may also be explained by the ubiquity of the fast Levenberg-Marquardt (L-M) procedure for fitting nonlinear models by least squares (simple searches return approximately 10,000 references, not counting those who use it without knowing it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward-gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence, whereas downward-gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least-squares estimators are used almost exclusively when fitting event-counting histograms. Ways have been found to use successive nonlinear least-squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to nonlinear least-squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least-squares measure, but the MLE for Poisson deviates.
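One compact way to reuse an existing L-M least-squares routine for the Poisson MLE (an illustration of the idea, not the authors' implementation) is to hand it signed deviance residuals, whose sum of squares equals the Poisson deviance, i.e. the MLE objective up to a data-only constant:

```python
import numpy as np
from scipy.optimize import least_squares

def poisson_deviance_residuals(params, t, counts, model):
    """Residuals whose sum of squares is the Poisson deviance,
    i.e. twice the negative log-likelihood up to a data-only constant."""
    mu = model(params, t)
    dev = 2.0 * (mu - counts)
    nz = counts > 0
    dev[nz] += 2.0 * counts[nz] * np.log(counts[nz] / mu[nz])
    return np.sign(counts - mu) * np.sqrt(np.maximum(dev, 0.0))

# Example: single-exponential decay histogram fitted with the L-M solver
decay = lambda p, t: p[0] * np.exp(-t / p[1]) + p[2]
# fit = least_squares(poisson_deviance_residuals, x0=[100.0, 2.0, 1.0],
#                     args=(t, counts, decay), method="lm")
```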
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method under typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the nonlinear least squares estimator (NLSE), the maximum likelihood estimator (MLE), and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus; the linear pseudo-model of Edmond Malinvaud [4] is thereby explained in a very different way. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting a nonlinear regression function in 2006. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital, since flooding threatens human life financially, environmentally, and in terms of security. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used a Bayesian Markov chain Monte Carlo (MCMC) method based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method performs statistical inference through the posterior distribution, based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which then yields better predictions of maximum river flow in Sabah.
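A sketch of the log-posterior such a sampler would target, using scipy's genextreme (note scipy's shape parameter c is the negative of the usual GEV shape ξ) and weakly informative priors of my own choosing; the paper's priors and scales may differ:

```python
import numpy as np
from scipy.stats import genextreme

def gev_log_posterior(params, annual_maxima):
    """Unnormalized log-posterior for GEV(mu, sigma, xi); prior scales
    below are illustrative and depend on the data's units."""
    mu, log_sigma, xi = params
    sigma = np.exp(log_sigma)
    ll = genextreme.logpdf(annual_maxima, -xi, loc=mu, scale=sigma).sum()
    log_prior = (-0.5 * (mu / 100.0) ** 2
                 - 0.5 * log_sigma ** 2
                 - 0.5 * (xi / 0.5) ** 2)
    return ll + log_prior
    # feed this to a Metropolis-Hastings kernel like the one sketched
    # earlier for the river flow paper
```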
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro; Urakabe, Eriko; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki
2006-09-07
In radiation therapy with highly energetic heavy ions, conformal irradiation of a tumour can be achieved by exploiting advantageous features such as the good dose localization and the high relative biological effectiveness around their mean range. For effective utilization of such properties, it is necessary to evaluate the range of incident ions and the deposited dose distribution in a patient's body. Several methods have been proposed to derive such physical quantities; one of them uses positron emitters generated through projectile fragmentation reactions of incident ions with target nuclei. We have proposed applying the maximum likelihood estimation (MLE) method to a detected annihilation gamma-ray distribution to determine the range of incident ions in a target, and we have demonstrated the effectiveness of the method with computer simulations. In this paper, water, polyethylene, and polymethyl methacrylate targets were each irradiated with stable ¹²C, ¹⁴N, ¹⁶O, and ²⁰Ne beams. Except for a few combinations of incident beams and targets, the MLE method could determine the range of incident ions R_MLE with a difference between R_MLE and the experimental range of less than 2.0 mm, under the condition that the measurement of annihilation gamma rays started just after the 61.4 s irradiation and lasted for 500 s. In the process of evaluating the range of incident ions with the MLE method, we must calculate many physical quantities, such as the fluence and the energy of both primary ions and fragments as a function of depth in the target; consequently, using them we can also obtain the dose distribution. Thus, when the mean range of incident ions is determined with the MLE method, the annihilation gamma-ray distribution and the deposited dose distribution can be derived simultaneously. The derived dose distributions in water for the mono-energetic heavy-ion beams of the four species were compared with those measured with an ionization chamber. The good agreement between the derived and the measured distributions implies that the deposited dose distribution in a target can be estimated from the detected annihilation gamma-ray distribution with a positron camera.
Influence of cigarette filter ventilation on smokers' mouth level exposure to tar and nicotine.
Caraway, John W; Ashley, Madeleine; Bowman, Sheri A; Chen, Peter; Errington, Graham; Prasad, Krishna; Nelson, Paul R; Shepperd, Christopher J; Fearon, Ian M
2017-12-01
Cigarette filter ventilation allows air to be drawn into the filter, diluting the cigarette smoke. Although machine smoking reveals that toxicant yields are reduced, it does not predict human yields. The objective of this study was to investigate the relationship between cigarette filter ventilation and mouth level exposure (MLE) to tar and nicotine in cigarette smokers. We collated and reviewed data from 11 studies across 9 countries, performed between 2005 and 2013, containing MLE data from 156 products with filter ventilation between 0% and 87%. MLE to tar and nicotine among 7534 participants was estimated using the part-filter analysis method from spent filter tips. In each country, MLE to tar and nicotine tended to decrease as filter ventilation increased. Across countries, per-cigarette MLE to tar and nicotine decreased as filter ventilation increased from 0% to 87%. Daily MLE to tar and nicotine also decreased across the range of increasing filter ventilation. These data suggest that, on average, smokers of highly ventilated cigarettes are exposed to lower amounts of nicotine and tar per cigarette and per day than smokers of cigarettes with lower levels of ventilation.
Range estimation of passive infrared targets through the atmosphere
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon
2013-04-01
Target range estimation in modern combat systems is traditionally based on radar and active sonar. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method, together with its fundamental limits, based on an atmospheric propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects at different wavelengths, this method is robust against electromagnetic jamming. The measured target radiance at each wavelength at the IR sensor depends on the emissive properties of the target material and various attenuation factors (i.e., the distance between sensor and target and atmospheric environment parameters). MODTRAN is a tool that models the atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmospheric-propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramér-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB and the variance of the MLE using Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force of mortality (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation; an extension to double decrement is also introduced. A simple dataset is used to illustrate mortality estimation and the resulting mortality table.
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters, and to characterise the speed and performance of the program.
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and high-end estimates derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on ordered statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined approaches include the parametric and nonparametric methods for handling left-censored data most commonly used in the medical and environmental sciences. This work presents a case study in which data on thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. The MLE was found to be the most appropriate statistical method for analyzing this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on initial configurations and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so maximum likelihood estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum mutual information estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides a performance improvement over MLE in this application.
Flood Frequency Analysis With Historical and Paleoflood Information
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Cohn, Timothy A.
1986-05-01
An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
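To make the "binomial" case concrete: each historical year below the perception threshold T contributes F(T) to the likelihood and each exceedance contributes 1 - F(T), alongside the density terms for the gaged record. A two-parameter lognormal sketch (my construction following this description, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def flood_nll(params, gaged, h_years, h_exceed, threshold):
    """Negative log-likelihood, 2-parameter lognormal: gaged peaks enter
    through the density; an h_years historical period enters only through
    the count h_exceed of years that topped the threshold."""
    mu, log_s = params
    s = np.exp(log_s)
    z = (np.log(threshold) - mu) / s
    ll = norm.logpdf((np.log(gaged) - mu) / s).sum() - len(gaged) * log_s
    ll += h_exceed * norm.logsf(z) + (h_years - h_exceed) * norm.logcdf(z)
    return -ll

# mu_hat, s_hat via minimize(flood_nll, x0, args=(...));
# flood quantiles then follow as Q_p = exp(mu + s * norm.ppf(p))
```

In the "censored" case, where the historical peak magnitudes are known, those peaks would enter through density terms as well, with the below-threshold years still contributing F(T).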
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
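In the slope method the range-corrected log signal is linear in range, x(r) = ln(P(r)r²) = ln C - 2σr, so the Gaussian MLE is a straight-line fit weighted by inverse variance. A minimal sketch follows; numpy.polyfit's weights multiply the residuals, hence one passes 1/standard deviation, and the delta-method variance of the log signal is an assumption of this illustration:

```python
import numpy as np

def slope_method(r, signal, noise_std):
    """Inverse-variance-weighted fit of ln(P r^2) = intercept - 2*sigma*r.
    Weight = 1/std of the log-transformed signal, approximately
    signal/noise_std for small relative noise (delta method)."""
    x = np.log(signal * r ** 2)
    w = signal / noise_std
    slope, intercept = np.polyfit(r, x, 1, w=w)
    extinction = -slope / 2.0
    return extinction, intercept
```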
A Bayesian model for estimating multi-state disease progression.
Shen, Shiwen; Han, Simon X; Petousis, Panayiotis; Weiss, Robert E; Meng, Frank; Bui, Alex A T; Hsu, William
2017-02-01
A growing number of individuals who are considered at high risk of cancer are now routinely undergoing population screening. However, noted harms such as radiation exposure, overdiagnosis, and overtreatment underscore the need for better temporal models that predict who should be screened and at what frequency. The mean sojourn time (MST), the average duration during which a tumor can be detected by imaging but shows no observable clinical symptoms, is a critical variable for formulating screening policy. Estimation of the MST has long been studied using a continuous Markov model (CMM) with maximum likelihood estimation (MLE). However, many traditional methods assume no observation error in the imaging data, which is unrealistic and can bias the estimation of the MST. In addition, the MLE may not be stably estimated when data are sparse. Addressing these shortcomings, we present a probabilistic modeling approach for periodic cancer screening data. We first model the cancer state transitions using a three-state CMM, while simultaneously considering observation error. We then jointly estimate the MST and the observation error within a Bayesian framework. We also consider the inclusion of covariates to estimate individualized rates of disease progression. Our approach is demonstrated on participants who underwent chest x-ray screening in the National Lung Screening Trial (NLST) and validated using posterior predictive p-values and Pearson's chi-square test. Our model produces more accurate and sensible estimates of the MST than the MLE.
Medical Literature Evaluation Education at US Schools of Pharmacy
Phillips, Jennifer; Demaris, Kendra
2016-01-01
Objective. To determine how medical literature evaluation (MLE) is being taught across the United States and to summarize methods for teaching and assessing MLE. Methods. An 18-question survey was administered to faculty members whose primary responsibility was teaching MLE at schools and colleges of pharmacy. Results. Responses were received from 90 (71%) US schools of pharmacy. The most common method of integrating MLE into the curriculum was as a stand-alone course (49%). The most common placement was during the second professional year (43%) or integrated throughout the curriculum (25%). The majority (77%) of schools used a team-based approach. The use of active-learning strategies was common as was the use of multiple methods of evaluation. Responses varied regarding what role the course director played in incorporating MLE into advanced pharmacy practice experiences (APPEs). Conclusion. There is a trend toward incorporating MLE education components throughout the pre-APPE curriculum and placement of literature review/evaluation exercises into therapeutics practice skills laboratories to help students see how this skill integrates into other patient care skills. Several pre-APPE educational standards for MLE education exist, including journal club activities, a team-based approach to teaching and evaluation, and use of active-learning techniques. PMID:26941431
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
Estimation of Rank Correlation for Clustered Data
Rosner, Bernard; Glynn, Robert
2017-01-01
It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator (MLE) of the Pearson correlation (ρ_xy) for i.i.d. bivariate normal data. However, this is not true for ophthalmologic data, where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the MLE of ρ_xy for clustered data, which can be implemented using standard mixed-effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (a) converting the ranks of both X and Y to the probit scale, (b) estimating the Pearson correlation between probit scores for X and Y, and (c) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. PMID:28399615
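Steps (a)-(c) are easy to state in code. The sketch below ignores the clustering adjustment, which in the paper comes from the mixed-effects MLE of the Pearson correlation; the probit transform and the bivariate-normal identity ρ_S = (6/π)·arcsin(ρ/2) are as described:

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_to_probit(v):
    """Step (a): probit scores of the ranks."""
    return norm.ppf(rankdata(v) / (len(v) + 1))

def rank_correlation_via_probit(x, y):
    """Steps (b)-(c): Pearson correlation of the probit scores, mapped to
    rank correlation via the bivariate-normal identity (no clustering)."""
    rho = np.corrcoef(rank_to_probit(x), rank_to_probit(y))[0, 1]
    return (6.0 / np.pi) * np.arcsin(rho / 2.0)
```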
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
A comparative simulation study of AR(1) estimators in short time series.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2017-01-01
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r_1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method, considering flat (B_f) and symmetrized reference (B_sr) priors. In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for B_sr and the lowest variability for r_1. The power in different conditions is highest for B_sr and OLS. For T = 10, the absolute performance of all methods is poor, as expected. In Study 2, we study the robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., B_sr and MLE. The bias gets larger when the non-modelled moving average parameter becomes larger. Both the variability and power depend on the non-modelled parameter. The differences between the two estimation methods are negligible for all measures.
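For readers unfamiliar with the competing estimators, the sketch below implements three of them for a single short series: the lag-1 sample autocorrelation r_1, the OLS regression of x_t on x_{t-1}, and the exact Gaussian MLE obtained by maximizing the stationary AR(1) likelihood. The Bayesian variants are omitted, and the data-generating snippet is illustrative.

```python
# Sketch of three AR(1) autocorrelation estimators for a short series.
import numpy as np
from scipy.optimize import minimize_scalar

def r1(x):                       # lag-1 sample autocorrelation
    xc = x - x.mean()
    return (xc[:-1] * xc[1:]).sum() / (xc ** 2).sum()

def ols(x):                      # regress x_t on x_{t-1}
    xc = x - x.mean()
    return (xc[:-1] * xc[1:]).sum() / (xc[:-1] ** 2).sum()

def mle(x):                      # exact stationary Gaussian AR(1) likelihood
    xc = x - x.mean()
    n = len(xc)
    def nll(phi):
        resid = xc[1:] - phi * xc[:-1]
        s2 = (xc[0] ** 2 * (1 - phi ** 2) + (resid ** 2).sum()) / n
        return 0.5 * (n * np.log(s2) - np.log(1 - phi ** 2))
    return minimize_scalar(nll, bounds=(-0.999, 0.999), method="bounded").x

phi_true, T = 0.5, 25
rng = np.random.default_rng(2)
x = np.empty(T)
x[0] = rng.standard_normal() / np.sqrt(1 - phi_true ** 2)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()
print(r1(x), ols(x), mle(x))
```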
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator's performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramér–Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
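As a point of reference, the Kasai estimator that the paper benchmarks against reduces to taking the phase of the lag-one autocorrelation of the complex signal. A minimal sketch, assuming a complex analytic signal z sampled at rate fs with additive Gaussian noise (all values illustrative):

```python
# Sketch of the lag-one Kasai autocorrelation frequency estimator.
import numpy as np

def kasai_doppler(z, fs):
    # mean Doppler frequency from the phase of the lag-1 autocorrelation
    r1 = np.mean(z[1:] * np.conj(z[:-1]))
    return fs * np.angle(r1) / (2.0 * np.pi)

fs, f_true = 100e3, 5e3
t = np.arange(256) / fs
rng = np.random.default_rng(3)
noise = 0.3 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
z = np.exp(2j * np.pi * f_true * t) + noise
print(kasai_doppler(z, fs))      # close to f_true
```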
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adds exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
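The procedure is short enough to sketch end to end for a normal model: fit the MLE to the data, draw one parametric bootstrap sample, refit the MLE on that sample, and form the Pearson statistic with expected counts computed under the bootstrap-sample MLE. Bin choices and the reference degrees of freedom below are illustrative.

```python
# Sketch of the bootstrap-MLE Pearson test for a normal model.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, size=300)

bins = np.quantile(x, np.linspace(0, 1, 11))   # ~equiprobable bins
bins[0], bins[-1] = -np.inf, np.inf

mu, sd = x.mean(), x.std()                     # MLE on the original data
xb = rng.normal(mu, sd, size=x.size)           # parametric bootstrap sample
mu_b, sd_b = xb.mean(), xb.std()               # bootstrap-sample MLE

obs, _ = np.histogram(x, bins)
expected = x.size * np.diff(norm.cdf(bins, mu_b, sd_b))
stat = ((obs - expected) ** 2 / expected).sum()
print(stat, chi2.sf(stat, df=len(obs) - 1))    # chi-squared reference
```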
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Evaluation of Coastal Sea Level from Jason-2 Altimetry Offshore Hong Kong
NASA Astrophysics Data System (ADS)
Birol, F.; Xu, X. Y., , Dr; Cazenave, A. A.
2017-12-01
In recent years, several coastal altimetry products for the Jason-2 mission have been distributed by different agencies, the most advanced of which are XTRACK, PISTACH and ALES. Each product represents an extraordinary endeavor on some aspect of retracking or advanced geophysical corrections, and each has its advantages. The motivation of this presentation is to evaluate these products in order to refine the sea level measurements at the coast. We focus on three retrackers: MLE4, MLE3 and ALES. Within 20 km of the coast, neither GDR nor ALES readily provides sea level anomaly (SLA) measurements, so we recomputed the 20 Hz GDR and ALES SLA from the raw data, adopting auxiliary information (such as waveform classification and wet tropospheric delay) from PISTACH. The region of interest is track #153 of the Jason-2 satellite (offshore Hong Kong, China), and the altimetry products are processed over seven years (2008-2015, cycles 1-252). The coastline offshore Hong Kong is rather complicated, so it can serve as a good indicator of the performance of coastal altimetry under unfavorable coastal conditions. We computed the bias and noise level of ALES, MLE3 and MLE4 SLA over the open ocean and in the coastal zone (within 10 km or 5 km of the coast). The results showed that, after outlier editing, ALES performs better than MLE4 and MLE3 both in terms of noise level and uncertainty in sea level trend estimation. We validated the coastal altimetry-based SLA by comparing with data from the Hong Kong tide gauge (located 10 km across-track). An interesting, but still preliminary, result is that the computed sea level trend within 5 km of the coast is significantly larger than the trend estimated at larger distances from the coast. Keywords: Jason-2, Hong Kong coast, ALES, MLE3, MLE4
BAO from Angular Clustering: Optimization and Mitigation of Theoretical Systematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crocce, M.; et al.
We study the theoretical systematics and optimize the methodology in Baryon Acoustic Oscillations (BAO) detections using the angular correlation function with tomographic bins. We calibrate and optimize the pipeline for the Dark Energy Survey Year 1 dataset using 1800 mocks. We compare the BAO fitting results obtained with three estimators: the Maximum Likelihood Estimator (MLE), Profile Likelihood, and Markov Chain Monte Carlo. The MLE method yields the least bias in the fit results (bias/spread ~ 0.02) and the error bar derived is the closest to the Gaussian results (1% from the 68% Gaussian expectation). When there is a mismatch between the template and the data, either due to incorrect fiducial cosmology or photo-z error, the MLE again gives the least-biased results. The BAO angular shift estimated from the sound horizon and the angular diameter distance agrees with the numerical fit. Various analysis choices are further tested: the number of redshift bins, cross-correlations, and angular binning. We propose two methods to correct the mock covariance when the final sample properties are slightly different from those used to create the mock. We show that the sample changes can be accommodated with the help of the Gaussian covariance matrix or, more effectively, using the eigenmode expansion of the mock covariance. The eigenmode expansion is significantly less susceptible to statistical fluctuations than direct measurement of the covariance matrix because the number of free parameters is substantially reduced (p parameters versus p(p+1)/2 from direct measurement).
NASA Technical Reports Server (NTRS)
Thadani, S. G.
1977-01-01
The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
Optimal estimation of diffusion coefficients from single-particle trajectories
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-02-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given a sufficiently long time series and a substrate under tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of the bias caused by substrate fluctuations in the CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycosylase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
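The CVE itself is a two-term moment formula, which is why it is so cheap relative to the MLE: the lag-one covariance of the displacements cancels the bias that localization noise adds to the mean squared displacement term. A minimal sketch with simulated data (all parameter values illustrative):

```python
# Sketch of the covariance-based estimator (CVE) for a diffusion coefficient.
import numpy as np

def cve(x, dt):
    d = np.diff(x)
    # <dx^2>/(2 dt) plus the lag-1 covariance term that cancels the
    # localization-noise bias
    return np.mean(d ** 2) / (2 * dt) + np.mean(d[1:] * d[:-1]) / dt

rng = np.random.default_rng(5)
D_true, dt, sigma_loc, n = 0.5, 0.01, 0.05, 10000
traj = rng.normal(0, np.sqrt(2 * D_true * dt), n).cumsum()
x = traj + rng.normal(0, sigma_loc, n)     # add localization noise
print(cve(x, dt))                          # approximately D_true
```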
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
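The EMA iteration is easiest to see in a deliberately simplified setting: a normal model with a systematic record plus h historical years known only to lie below a perception threshold T. The sketch below follows the same observed-plus-expected moment update; the paper's application fits the three-parameter log-Pearson type III instead, and all values here are illustrative.

```python
# Illustrative EMA-style loop for a normal model with censored historical years.
import numpy as np
from scipy.stats import norm

def ema_normal(sys_data, T, h, iters=50):
    mu, sd = sys_data.mean(), sys_data.std()    # moments from systematic data
    n = sys_data.size
    for _ in range(iters):
        a = (T - mu) / sd
        lam = norm.pdf(a) / norm.cdf(a)
        m1 = mu - sd * lam                       # E[X | X < T]
        v = sd ** 2 * (1 - a * lam - lam ** 2)   # Var[X | X < T]
        m2 = v + m1 ** 2                         # E[X^2 | X < T]
        # method-of-moments update mixing observed and expected moments
        mu = (sys_data.sum() + h * m1) / (n + h)
        sd = np.sqrt((np.sum(sys_data ** 2) + h * m2) / (n + h) - mu ** 2)
    return mu, sd

rng = np.random.default_rng(6)
print(ema_normal(rng.normal(10, 2, 40), T=13.0, h=60))
```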
Statistical Considerations of Data Processing in Giovanni Online Tool
NASA Technical Reports Server (NTRS)
Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.
2005-01-01
The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua, TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products versus averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will give different results. In this paper, we address differences among various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to different averaging methods affecting data quality and interpretation for data with a non-normal distribution. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases. Further studies indicated that significant differences between the AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of a log-normal distribution.
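The three averages differ in a predictable way for lognormal-like data, which the following sketch makes explicit: the geometric mean is the exponential of the mean log value, while the lognormal MLE of the mean adds half the log-scale variance back in. The chlorophyll values below are simulated, not SeaWiFS data.

```python
# Sketch of the three spatial averaging methods on lognormal-like data.
import numpy as np

rng = np.random.default_rng(7)
chl = rng.lognormal(mean=-1.0, sigma=1.2, size=5000)

avg = chl.mean()                           # arithmetic mean (AVG)
geo = np.exp(np.log(chl).mean())           # geometric mean (GEO)
mu, s2 = np.log(chl).mean(), np.log(chl).var()
mle = np.exp(mu + s2 / 2)                  # lognormal MLE of the mean
print(avg, geo, mle)                       # GEO is lowest, as reported
```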
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts, since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and computational time, is the combined MLE-HLS algorithm.
Estimating the probability of rare events: addressing zero failure data.
Quigley, John; Revie, Matthew
2011-07-01
Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk-averse inferential procedure where there are no events realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated by 1/(2.5n), where n is the number of trials. © 2011 Society for Risk Analysis.
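The closing approximation is simple enough to state in two lines of code; the contrast below with the plain MLE (which returns zero for zero observed events) is the point of the paper, and the trial counts are illustrative.

```python
# Sketch: MLE versus the 1/(2.5 n) zero-failure approximation.
def minimax_zero_failure(n: int) -> float:
    return 1.0 / (2.5 * n)

for n in (10, 50, 100):
    k = 0                                       # no events in n trials
    print(n, k / n, minimax_zero_failure(n))    # MLE is 0; minimax is not
```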
Optimal designs based on the maximum quasi-likelihood estimator
Shen, Gang; Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimator, such as the least squares estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to misspecification in the probability distribution of the responses. PMID:28163359
A comparison of Probability Of Detection (POD) data determined using different statistical methods
NASA Astrophysics Data System (ADS)
Fahr, A.; Forsyth, D.; Bullock, M.
1993-12-01
Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service-induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size, as well as the 90/95 percent crack length, vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimation (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
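A log-logistic hit/miss POD model of the kind discussed here is an MLE fit of a logistic curve in log crack length. The sketch below shows that fit and the derived crack length at POD = 0.90; the data and parameter values are simulated stand-ins, and confidence bounds (the 95 part of 90/95) are not computed.

```python
# Sketch: log-logistic hit/miss POD curve fitted by MLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
a = rng.uniform(0.5, 5.0, 200)                       # crack lengths
p_true = 1 / (1 + np.exp(-(3.0 * np.log(a) - 1.0)))
hit = rng.random(200) < p_true                       # hit/miss outcomes

def negloglik(theta):
    b0, b1 = theta
    eta = b0 + b1 * np.log(a)
    # Bernoulli log-likelihood; logaddexp(0, eta) = log(1 + exp(eta))
    return np.sum(np.logaddexp(0, eta)) - np.sum(eta[hit])

b0, b1 = minimize(negloglik, x0=[0.0, 1.0]).x
a90 = np.exp((np.log(9.0) - b0) / b1)                # POD(a90) = 0.90
print(b0, b1, a90)
```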
Fast estimation of diffusion tensors under Rician noise by the EM algorithm.
Liu, Jia; Gasbarra, Dario; Railavo, Juha
2016-01-15
Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomic, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, reducing dramatically the computational cost. The Fisher-scoring method is used for achieving fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data in a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We describe how numerically close the estimators of model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
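The Rician likelihood at the heart of the method can be illustrated without the EM machinery by directly maximizing the likelihood of a single underlying amplitude, using SciPy's rice density as a stand-in; the paper's contribution is doing the equivalent efficiently inside a full tensor regression. Values below are illustrative and the noise level is assumed known.

```python
# Sketch: direct Rician MLE for one signal amplitude with known noise sigma.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rice

sigma, nu_true = 1.0, 5.0
rng = np.random.default_rng(9)
m = rice.rvs(b=nu_true / sigma, scale=sigma, size=500, random_state=rng)

def nll(nu):
    return -rice.logpdf(m, b=nu / sigma, scale=sigma).sum()

print(minimize_scalar(nll, bounds=(1e-3, 20.0), method="bounded").x)
```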
NASA Technical Reports Server (NTRS)
Brooks, W. L.; Dooley, R. P.
1975-01-01
The design of a high resolution radar for altimetry and ocean wave height estimation was studied. From basic principles, it is shown that a short pulse wide beam radar is the most appropriate and recommended technique for measuring both altitude and ocean wave height. To achieve a topographic resolution of ±10 cm RMS at 5.0 meter RMS wave heights, as required for SEASAT-A, it is recommended that the altimeter design include an onboard adaptive processor. The resulting design, which assumes a maximum likelihood estimation (MLE) processor, is shown to satisfy all performance requirements. A design summary is given for the recommended radar altimeter, which includes a full deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns, as well as the assumed MLE processor. The feedback loop implementation of the MLE on a digital computer was examined in detail, and computer size, estimation accuracies, and bias due to range sidelobes are given for the MLE with typical SEASAT-A parameters. The standard deviation of the altitude estimate was developed and evaluated for several adaptive and nonadaptive split-gate trackers. Split-gate tracker biases due to range sidelobes and transmitter noise are examined. An approximate closed form solution for the altimeter power return is derived and evaluated. The feasibility of utilizing the basic radar altimeter design for the measurement of ocean wave spectra was examined.
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
NASA Astrophysics Data System (ADS)
Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.
2017-02-01
Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in the extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Being a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to bring the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact the diagnosis and treatment of human diseases.
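The classic MLE deconvolution that this line of work builds on is the Richardson-Lucy iteration, which the sketch below implements; the AdvMLE algorithm additionally adapts the PSF estimate between iterations, which is not reproduced here. The test image is synthetic.

```python
# Sketch of Richardson-Lucy MLE deconvolution (fixed, known PSF).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=25):
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode="same")
    return est

y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x ** 2 + y ** 2) / 8.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[32, 20] = truth[32, 44] = 1.0
observed = fftconvolve(truth, psf, mode="same") + 1e-3
print(richardson_lucy(observed, psf).max())    # peaks sharpen toward truth
```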
Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials
NASA Technical Reports Server (NTRS)
Kushary, Debashis; Kulkarni, Pandurang M.
1995-01-01
We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effect, repeated measurements and observing many characteristics simultaneously. We develop the maximum likelihood estimator (MLE) and the uniform minimum variance unbiased estimator (UMVUE) of the reliability, which in clinical trial studies could be considered as the chances of increased side effects due to a particular procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple-to-use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time-dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.
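For the simplest stress-strength setting, the MLE of the reliability R = P(X < Y) under independent normal stress and strength is a plug-in of the normal MLEs, as sketched below; the paper's estimators extend this idea to the correlated, clustered observations that arise in clinical trials. Data here are simulated.

```python
# Sketch: plug-in MLE of R = P(X < Y) for independent normal X and Y.
import numpy as np
from scipy.stats import norm

def reliability_mle(x, y):
    # R = Phi((mu_y - mu_x) / sqrt(sigma_x^2 + sigma_y^2)), using ML variances
    return norm.cdf((y.mean() - x.mean()) /
                    np.sqrt(x.var(ddof=0) + y.var(ddof=0)))

rng = np.random.default_rng(10)
print(reliability_mle(rng.normal(0, 1, 80), rng.normal(1, 1, 80)))
```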
Markov Chain Monte Carlo: an introduction for epidemiologists
Hamra, Ghassan; MacLehose, Richard; Richardson, David
2013-01-01
Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with the interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit. PMID:23569196
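The MLE-matching claim is easy to verify in the simplest possible setting: a random-walk Metropolis sampler for a normal mean under a flat prior, whose posterior mean coincides with the MLE (the sample mean). A minimal sketch with illustrative tuning constants:

```python
# Sketch: random-walk Metropolis for a normal mean, flat prior.
import numpy as np

rng = np.random.default_rng(11)
data = rng.normal(2.0, 1.0, 50)

def logpost(mu):                 # flat prior => log-likelihood up to a constant
    return -0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + 0.3 * rng.standard_normal()
    if np.log(rng.random()) < logpost(prop) - logpost(mu):
        mu = prop                # accept
    chain.append(mu)
print(np.mean(chain[5000:]), data.mean())    # MCMC posterior mean vs MLE
```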
Chen, Xiaoxin; Qin, Rong; Liu, Ba; Ma, Yan; Su, Yinghao; Yang, Chung S; Glickman, Jonathan N; Odze, Robert D; Shaheen, Nicholas J
2008-01-01
Background: In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor of intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembled human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753–765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Methods: Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. Results: We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1α, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE compared to humans. Conclusion: These data indicate that rat MLE shares similar properties to human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE. PMID:18190713
NASA Technical Reports Server (NTRS)
Pierson, W. J.
1982-01-01
The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
Minimax Estimation of Functionals of Discrete Distributions
Jiao, Jiantao; Venkat, Kartik; Han, Yanjun; Weissman, Tsachy
2017-01-01
We propose a general methodology for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions, where the support size S is unknown and may be comparable with or even much larger than the number of observations n. We treat the respective regions where the functional is nonsmooth and smooth separately. In the nonsmooth regime, we apply an unbiased estimator for the best polynomial approximation of the functional whereas, in the smooth regime, we apply a bias-corrected version of the maximum likelihood estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing the performance of the resulting schemes for estimating two important information measures: 1) the entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and 2) $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$, $\alpha > 0$. We obtain the minimax $L_2$ rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity $n \asymp S/\ln S$ for entropy estimation. We also demonstrate that the sample complexity for estimating $F_\alpha(P)$, $0 < \alpha < 1$, is $n \asymp S^{1/\alpha}/\ln S$, which can be achieved by our estimator but not the MLE. For $1 < \alpha < 3/2$, we show the minimax $L_2$ rate for estimating $F_\alpha(P)$ is $(n \ln n)^{-2(\alpha-1)}$ for infinite support size, while the maximum $L_2$ rate for the MLE is $n^{-2(\alpha-1)}$. For all the above cases, the behavior of the minimax rate-optimal estimators with n samples is essentially that of the MLE (plug-in rule) with $n \ln n$ samples, which we term "effective sample size enlargement." We highlight the practical advantages of our schemes for the estimation of entropy and mutual information. We compare our performance with various existing approaches, and demonstrate that our approach reduces running time and boosts the accuracy. Moreover, we show that the minimax rate-optimal mutual information estimator yielded by our framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models. The wide use of information measure estimation suggests that the insights and estimators obtained in this paper could be broadly applicable. PMID:29375152
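The plug-in MLE's downward bias for entropy, and the simplest classical correction, are easy to demonstrate; the sketch below contrasts the plug-in estimate with the Miller-Madow correction on a support comparable in size to the sample, a regime where the paper's polynomial-approximation estimator is designed to do much better. All values are illustrative.

```python
# Sketch: plug-in (MLE) entropy versus the Miller-Madow bias correction.
import numpy as np

rng = np.random.default_rng(12)
S, n = 200, 100                            # support size comparable to n
p = rng.dirichlet(np.ones(S))
counts = rng.multinomial(n, p)

phat = counts[counts > 0] / n
H_mle = -(phat * np.log(phat)).sum()       # plug-in; biased downward
H_mm = H_mle + (np.count_nonzero(counts) - 1) / (2 * n)
H_true = -(p * np.log(p)).sum()
print(H_true, H_mle, H_mm)
```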
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background changes frequently during the course of the flight.
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
Morinda citrifolia L. leaf extract prevent weight gain in Sprague-Dawley rats fed a high fat diet.
Jambocus, Najla Gooda Sahib; Ismail, Amin; Khatib, Alfi; Mahomoodally, Fawzi; Saari, Nazamid; Mumtaz, Muhammad Waseem; Hamid, Azizah Abdul
2017-01-01
Background: Morinda citrifolia L. is widely used as a folk medicinal food plant to manage a panoply of diseases, though there are no concrete reports on its potential anti-obesity activity. This study aimed to evaluate the potential of M. citrifolia leaf extracts (MLE60) in the prevention of weight gain in vivo and to establish its phytochemical profile. Design: Male Sprague-Dawley rats were divided into groups based on a normal diet (ND) or high fat diet (HFD), with or without MLE60 supplementation (150 and 350 mg/kg body weight), and assessed for any reduction in weight gain. Plasma leptin, insulin, adiponectin, and ghrelin of all groups were determined. 1H NMR and LC-MS methods were employed for phytochemical profiling of MLE60. Results: The supplementation of MLE60 did not affect food intake, indicating that appetite suppression might not be the main anti-obesity mechanism involved. In the treated groups, MLE60 prevented weight gain, most likely through an inhibition of pancreatic and lipoprotein lipase activity, with a positive influence on the lipid profiles and a reduction in LDL levels. MLE60 also attenuated visceral fat deposition in treated subjects with improvement in the plasma levels of obesity-linked factors. Spectral analysis showed the presence of several bioactive compounds, with rutin being the most predominant. Conclusion: MLE60 shows promise as an anti-obesity agent and warrants further research.
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.
GLOBAL RATES OF CONVERGENCE OF THE MLES OF LOG-CONCAVE AND s-CONCAVE DENSITIES
Doss, Charles R.; Wellner, Jon A.
2017-01-01
We establish global rates of convergence for the Maximum Likelihood Estimators (MLEs) of log-concave and s-concave densities on ℝ. The main finding is that the rate of convergence of the MLE in the Hellinger metric is no worse than $n^{-2/5}$ when $-1 < s < \infty$, where $s = 0$ corresponds to the log-concave case. We also show that the MLE does not exist for the classes of s-concave densities with $s < -1$. PMID:28966409
Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn
2008-09-30
The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and estimator choice. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain of MLE enhances with stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
The effect of mis-specification on mean and selection between the Weibull and lognormal models
NASA Astrophysics Data System (ADS)
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study in this paper.
Mbah, Chamberlain; De Ruyck, Kim; De Schrijver, Silke; De Sutter, Charlotte; Schiettecatte, Kimberly; Monten, Chris; Paelinck, Leen; De Neve, Wilfried; Thierens, Hubert; West, Catharine; Amorim, Gustavo; Thas, Olivier; Veldeman, Liv
2018-05-01
Evaluation of patient characteristics inducing toxicity in breast radiotherapy, using simultaneous modeling of multiple endpoints. In 269 early-stage breast cancer patients treated with whole-breast irradiation (WBI) after breast-conserving surgery, toxicity was scored, based on five dichotomized endpoints. Five logistic regression models were fitted, one for each endpoint, and the effect sizes of all variables were estimated using maximum likelihood (MLE). The MLEs are improved with James-Stein estimates (JSEs). The method combines all the MLEs obtained for the same variable but from different endpoints. Misclassification errors were computed using MLE- and JSE-based prediction models. For associations, p-values from the sum of squares of MLEs were compared with p-values from the Standardized Total Average Toxicity (STAT) score. With JSEs, the 19 highest-ranked variables were predictive of the five different endpoints. Important variables increasing radiation-induced toxicity were chemotherapy, age, SATB2 rs2881208 SNP and nodal irradiation. Treatment position (prone position) was most protective and ranked eighth. Overall, the misclassification errors were 45% and 34% for the MLE- and JSE-based models, respectively. p-Values from the sum of squares of MLEs and p-values from the STAT score led to very similar conclusions, except for the variables nodal irradiation and treatment position, for which STAT p-values suggested an association with radiosensitivity, whereas p-values from the sum of squares indicated no association. Breast volume was ranked as the most significant variable in both strategies. The James-Stein estimator was used for selecting variables that are predictive for multiple toxicity endpoints. With this estimator, 19 variables were predictive for all toxicities, of which four were significantly associated with overall radiosensitivity. JSEs led to an almost 25% reduction in the misclassification error rate compared to conventional MLEs. Finally, patient characteristics associated with radiosensitivity were identified without explicitly quantifying radiosensitivity.
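The core shrinkage step can be sketched compactly: the positive-part James-Stein estimator pulls a vector of standardized, unit-variance MLEs toward zero, improving total mean squared error whenever three or more coefficients are combined. The sketch below is a generic illustration, not the paper's full multi-endpoint pipeline, and the input values are made up.

```python
# Sketch: positive-part James-Stein shrinkage of standardized MLEs.
import numpy as np

def james_stein(beta_mle):
    # assumes unit-variance estimates; improves total MSE when p >= 3
    p = beta_mle.size
    shrink = max(0.0, 1.0 - (p - 2) / np.sum(beta_mle ** 2))
    return shrink * beta_mle

print(james_stein(np.array([2.0, -1.5, 1.0, 2.5, 0.8])))
```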
Morinda citrifolia L. leaf extract prevent weight gain in Sprague-Dawley rats fed a high fat diet
Jambocus, Najla Gooda Sahib; Ismail, Amin; Khatib, Alfi; Mahomoodally, Fawzi; Saari, Nazamid; Mumtaz, Muhammad Waseem; Hamid, Azizah Abdul
2017-01-01
ABSTRACT Background: Morinda citrifolia L. is widely used as a folk medicinal food plant to manage a panoply of diseases, though there are no concrete reports on its potential anti-obesity activity. This study aimed to evaluate the potential of M. citrifolia leaf extracts (MLE60) in the prevention of weight gain in vivo and to establish its phytochemical profile. Design: Male Sprague-Dawley rats were divided into groups based on a normal diet (ND) or high fat diet (HFD), with or without MLE60 supplementation (150 and 350 mg/kg body weight), and assessed for any reduction in weight gain. Plasma leptin, insulin, adiponectin, and ghrelin of all groups were determined. 1H NMR and LCMS methods were employed for phytochemical profiling of MLE60. Results: The supplementation of MLE60 did not affect food intake, indicating that appetite suppression might not be the main anti-obesity mechanism involved. In the treated groups, MLE60 prevented weight gain, most likely through an inhibition of pancreatic lipase and lipoprotein lipase activity, with a positive influence on the lipid profiles and a reduction in LDL levels. MLE60 also attenuated visceral fat deposition in treated subjects with improvement in the plasma levels of obesity-linked factors. Spectral analysis showed the presence of several bioactive compounds, with rutin being the most predominant. Conclusion: MLE60 shows promise as an anti-obesity agent and warrants further research. PMID:28814950
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
A polychromatic adaption of the Beer-Lambert model for spectral decomposition
NASA Astrophysics Data System (ADS)
Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.
2017-03-01
We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model requires only a minimal calibration effort, making the method applicable in routine clinical set-ups where periodic re-calibration is needed. In this work we present an experimental verification of our proposed method. The method uses an adapted Beer-Lambert model that describes the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model is capable of handling spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model make it a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
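A minimal sketch of the adapted Beer-Lambert idea, assuming a toy two-term model and hypothetical calibration thicknesses; the function names and the least-squares calibration via curve_fit are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def forward_counts(A, *coeffs):
    """Adapted Beer-Lambert model: registered counts for basis-material
    line integrals A = (A1, A2), using a sum of exponentials to mimic
    the polychromatic spectrum as seen through the detector response."""
    A1, A2 = A
    n = len(coeffs) // 3
    w = coeffs[:n]              # effective spectral weights
    m1 = coeffs[n:2 * n]        # effective attenuation, material 1
    m2 = coeffs[2 * n:]         # effective attenuation, material 2
    return sum(wi * np.exp(-u1 * A1 - u2 * A2) for wi, u1, u2 in zip(w, m1, m2))

# Hypothetical calibration: counts measured for known thickness combinations
A1, A2 = np.meshgrid(np.linspace(0, 3, 6), np.linspace(0, 2, 6))
A = (A1.ravel(), A2.ravel())
counts = 1e4 * (0.6 * np.exp(-0.5 * A[0] - 0.9 * A[1])
                + 0.4 * np.exp(-0.2 * A[0] - 0.4 * A[1]))  # stand-in data

p0 = [5e3, 5e3, 0.4, 0.3, 0.8, 0.5]   # two exponential terms
popt, _ = curve_fit(forward_counts, A, counts, p0=p0, maxfev=20000)
print(popt)
```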
Superfast maximum-likelihood reconstruction for quantum tomography
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
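For illustration, a toy projected-gradient MLE for state tomography in Python; the eigenvalue simplex projection is the standard projection onto the density-matrix set, while the plain (non-accelerated) gradient step, step size, and single-qubit example data are simplifying assumptions:

```python
import numpy as np

def project_to_density_matrix(H):
    """Project a Hermitian matrix onto the set of density matrices
    (positive semidefinite, unit trace) by projecting its eigenvalues
    onto the probability simplex."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]                      # eigenvalues, descending
    css = np.cumsum(u)
    idx = np.nonzero(u + (1 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[idx] - 1) / (idx + 1)
    lam = np.maximum(w - tau, 0)
    return (V * lam) @ V.conj().T

def mle_tomography(povm, counts, dim, steps=500, lr=0.5):
    """Projected-gradient ascent on the log-likelihood
    sum_k n_k log tr(E_k rho); a toy version of accelerated schemes."""
    rho = np.eye(dim, dtype=complex) / dim
    for _ in range(steps):
        probs = np.real([np.trace(E @ rho) for E in povm])
        grad = sum(n / p * E for n, p, E in zip(counts, probs, povm))
        rho = project_to_density_matrix(rho + lr * grad / counts.sum())
    return rho

# Toy single-qubit example: projectors for Z- and X-basis measurements
E = [np.array(m, dtype=complex) for m in
     ([[1, 0], [0, 0]], [[0, 0], [0, 1]],
      [[.5, .5], [.5, .5]], [[.5, -.5], [-.5, .5]])]
counts = np.array([430, 70, 400, 100])        # hypothetical outcome counts
print(np.round(mle_tomography(E, counts, dim=2), 3))
```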
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. Experimental results on over 1300 natural images show the effectiveness of the proposed method. Compared with the existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
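A minimal sketch of the closed-form MLEs for a power-law (Crow-AMSAA) ROCOF under time truncation, with hypothetical failure ages; the cost-of-failure step is omitted:

```python
import numpy as np

def power_law_rocof_mle(failure_times, T):
    """MLEs for a power-law intensity u(t) = lam * beta * t**(beta - 1),
    given failure times observed on (0, T] (time-truncated case)."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T**beta
    return lam, beta

def expected_failures(lam, beta, t1, t2):
    """Integral of the ROCOF over (t1, t2]: the expected failure count."""
    return lam * (t2**beta - t1**beta)

times = [120, 305, 500, 810, 1010, 1220, 1380, 1460]  # hypothetical ages (h)
lam, beta = power_law_rocof_mle(times, T=1500)
print(f"beta = {beta:.2f} (beta > 1 suggests deterioration)")
print("expected failures next year:",
      round(expected_failures(lam, beta, 1500, 1500 + 8760), 1))
```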
2017-09-28
In forensic DNA analysis, the interpretation of a sample acquired from the environment may be dependent upon the assumption on the number of individuals from which the evidence arose. Degraded and… NOCIt results to those obtained when allele counting or maximum likelihood estimator (MLE) methods are employed. NOCIt does not depend upon an AT and…
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
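For context, the conventional (slow) MLE that MLE-BD improves upon can be sketched with the Ma-Sandri-Sarkar recursion for the Luria-Delbrück mutant-count distribution; the O(n²) recursion is exactly what makes large mutant counts expensive. Counts and bounds below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """Luria-Delbruck mutant-count pmf via the Ma-Sandri-Sarkar recursion."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        i = np.arange(n)                      # i = 0..n-1
        p[n] = (m / n) * np.sum(p[i] / (n - i + 1))
    return p

def ld_mle(counts):
    """Conventional MLE of the expected number of mutations m."""
    counts = np.asarray(counts)
    n_max = counts.max()
    def nll(m):
        p = ld_pmf(m, n_max)
        return -np.sum(np.log(p[counts] + 1e-300))
    return minimize_scalar(nll, bounds=(1e-6, 50), method="bounded").x

mutant_counts = [0, 1, 0, 3, 12, 0, 2, 1, 45, 0, 5, 2]  # hypothetical cultures
print("MLE of m:", round(ld_mle(mutant_counts), 3))
```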
Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.
Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D
2015-11-16
Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up accounting for the sequential design and changes in cure status during extended follow-up. We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended, follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) was assessed by simulation. The SHE and PTE are viable approaches to estimate efficacy at extended follow-up, although the SHE performed better than the PTE: the bias and root mean square error were lower and coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. NCT01067443, February 2010.
Tzuriel, D
1999-05-01
The main objectives of this article are to describe the effects of mediated learning experience (MLE) strategies in mother-child interactions on the child's cognitive modifiability, the effects of distal factors (e.g., socioeconomic status, mother's intelligence, child's personality) on MLE interactions, and the effects of situational variables on MLE processes. Methodological aspects of measurement of MLE interactions and of cognitive modifiability, using a dynamic assessment approach, are discussed. Studies with infants showed that the quality of mother-infant MLE interactions predict later cognitive functioning and that MLE patterns and children's cognitive performance change as a result of intervention programs. Studies with preschool and school-aged children showed that MLE interactions predict cognitive modifiability and that distal factors predict MLE interactions but not the child's cognitive modifiability. The child's cognitive modifiability was predicted by MLE interactions in a structured but not in a free-play situation. Mediation for transcendence (e.g., teaching rules and generalizations) appeared to be the strongest predictor of children's cognitive modifiability. Discussion of future research includes the consideration of a holistic transactional approach, which refers to MLE processes, personality, and motivational-affective factors, the cultural context of mediation, perception of the whole family as a mediational unit, and the "mediational normative scripts."
Antweiler, Ronald C.
2015-01-01
The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
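As an illustration of the MLE technique for censored data referenced above, here is a minimal Python sketch fitting a lognormal to left-censored observations, where nondetects contribute log-CDF terms at their detection limits; the data are hypothetical:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def censored_lognormal_mle(detects, detection_limits):
    """MLE of lognormal parameters from left-censored data:
    detects contribute log-pdf terms, nondetects contribute log-cdf(DL)."""
    d = np.log(np.asarray(detects))
    dl = np.log(np.asarray(detection_limits))
    def nll(theta):
        mu, log_s = theta
        s = np.exp(log_s)
        return -(np.sum(stats.norm.logpdf(d, mu, s))
                 + np.sum(stats.norm.logcdf(dl, mu, s)))
    res = minimize(nll, x0=[d.mean(), np.log(d.std() + 1e-6)])
    return res.x[0], np.exp(res.x[1])     # (mu, sigma) on the log scale

detects = [3.1, 5.4, 2.2, 8.0, 4.7, 6.3]
nondetect_dls = [2.0, 2.0, 5.0]   # values known only to be below these DLs
print(censored_lognormal_mle(detects, nondetect_dls))
```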
Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R
2010-01-01
Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
Radar cross section models for limited aspect angle windows
NASA Astrophysics Data System (ADS)
Robinson, Mark C.
1992-12-01
This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS, showing that a limited number of samples can effectively characterize static aircraft RCS. The optimum models are determined by performing both Kolmogorov-Smirnov and Chi-Square goodness-of-fit tests comparing the static RCS data with a variety of probability density functions (pdf) known to approximate the static RCS of aircraft well. The optimum parameter estimator is likewise determined by the goodness-of-fit tests when the pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) differ.
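A minimal sketch of the MLE-versus-MoM comparison with a goodness-of-fit check, using a gamma distribution as a stand-in for the RCS pdfs considered in the thesis; all values are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rcs = rng.gamma(shape=2.0, scale=1.5, size=400)  # stand-in "static RCS" samples

# Method of Moments: closed form for the gamma distribution
mean, var = rcs.mean(), rcs.var()
k_mom, theta_mom = mean**2 / var, var / mean

# Maximum likelihood via scipy (location fixed at zero)
k_mle, _, theta_mle = stats.gamma.fit(rcs, floc=0)

for name, (k, th) in {"MoM": (k_mom, theta_mom),
                      "MLE": (k_mle, theta_mle)}.items():
    ks = stats.kstest(rcs, "gamma", args=(k, 0, th))
    print(f"{name}: shape={k:.2f} scale={th:.2f}  KS stat={ks.statistic:.4f}")
```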
Optimally resolving Lambertian surface orientation
NASA Astrophysics Data System (ADS)
Bertsatos, Ioannis; Makris, Nicholas C.
2003-10-01
Sonar images of remote surfaces are typically corrupted by signal-dependent noise known as speckle. Relative motion between source, surface, and receiver causes the received field to fluctuate over time with circular complex Gaussian random (CCGR) statistics. In many cases of practical importance, Lambert's law is appropriate to model radiant intensity from the surface. In a previous paper, maximum likelihood estimators (MLE) for Lambertian surface orientation have been derived based on CCGR measurements [N. C. Makris, SACLANT Conference Proceedings Series CP-45, 1997, pp. 339-346]. A Lambertian surface needs to be observed from more than one illumination direction for its orientation to be properly constrained. It is found, however, that MLE performance varies significantly with illumination direction due to the inherently nonlinear nature of this problem. It is shown that a large number of samples is often required to optimally resolve surface orientation using the optimality criteria of the MLE derived in Naftali and Makris [J. Acoust. Soc. Am. 110, 1917-1930 (2001)].
Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based
NASA Technical Reports Server (NTRS)
Anderson, Kevin
2012-01-01
Pulsars are a form of variable celestial source that have been shown to be usable as aids for autonomous, deep space navigation. Sources emitting in the X-ray band are particularly well suited for navigation due to the smaller detector sizes required. In this paper, X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed signal frequency is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is applied. The outputs of this phase estimation process are then passed through a digital phase-locked loop (DPLL) to reduce the errors and produce estimates of the Doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space Flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used to generate test scenarios. Three simulated detector trajectories were tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time-varying frequency. All of the constant frequency cases were processed using the MLE, and it was shown that they tracked the initial phase within 0.15% for the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in the varying frequency case. This resulted in tracking of the phase and frequency by the DPLL outputs in both the simulation and experimental environments. The Crab pulsar was also tested experimentally with a higher-acceleration trajectory. In this case the phase error tended toward zero as the observation extended to 250 seconds, and the Doppler frequency error tended to zero in under 100 seconds.
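A toy version of the constant-frequency phase MLE, assuming a raised-cosine pulse profile and photons generated by thinning a Poisson process; the template, rates, and frequencies are illustrative stand-ins, not the testbed's models:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def template(phi, pulsed_frac=0.3):
    """Normalized rate profile over phase in [0,1): DC level plus one harmonic."""
    return 1.0 + pulsed_frac * np.cos(2 * np.pi * phi)

def phase_mle(photon_times, freq):
    """MLE of the initial pulse phase for a constant-frequency signal.
    For an NHPP, the log-likelihood reduces (up to constants) to the sum
    of log rates evaluated at the observed photon phases."""
    phases = (photon_times * freq) % 1.0
    nll = lambda d: -np.sum(np.log(template((phases - d) % 1.0)))
    return minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded").x

# Hypothetical photon arrivals: thin a homogeneous Poisson process
rng = np.random.default_rng(0)
f0, true_phase, rate = 29.6, 0.37, 500.0        # ~Crab-like frequency (Hz)
t = np.cumsum(rng.exponential(1 / rate, 20000))
keep = rng.uniform(0, 2, t.size) < template((t * f0 - true_phase) % 1.0)
print("estimated phase:", round(phase_mle(t[keep], f0), 3), "true:", true_phase)
```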
Long-range persistence in the global mean surface temperature and the global warming "time bomb"
NASA Astrophysics Data System (ADS)
Rypdal, M.; Rypdal, K.
2012-04-01
Detrended Fluctuation Analysis (DFA) and Maximum Likelihood Estimation (MLE) based on instrumental data over the last 160 years indicate that there is Long-Range Persistence (LRP) in Global Mean Surface Temperature (GMST) on time scales of months to decades. The persistence is much higher in sea surface temperatures than in land temperatures. Power spectral analysis of multi-model, multi-ensemble runs of global climate models further indicates that this persistence may extend to centennial and perhaps even millennial time scales. We also support these conclusions by wavelet variogram analysis, DFA, and MLE of Northern Hemisphere mean surface temperature reconstructions over the last two millennia. These analyses indicate that the GMST is a strongly persistent noise with Hurst exponent H > 0.9 on time scales from decades up to at least 500 years. We show that such LRP can be very important for long-term climate prediction and for the establishment of a "time bomb" in the climate system due to a growing energy imbalance caused by the slow relaxation to radiative equilibrium under rising anthropogenic forcing. We do this by constructing a multi-parameter dynamic-stochastic model for the GMST response to deterministic and stochastic forcing, where LRP is represented by a power-law response function. Reconstructed data for total forcing and GMST over the last millennium are used with this model to estimate trend coefficients and the Hurst exponent for the GMST on multi-century time scales by means of MLE. Ensembles of solutions generated from the stochastic model also allow us to estimate confidence intervals for these estimates.
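For reference, a compact first-order DFA implementation of the kind used to detect LRP; the toy series and scale choices are assumptions for illustration:

```python
import numpy as np

def dfa(x, scales):
    """Detrended Fluctuation Analysis: returns the fluctuation function F(n)
    and the scaling exponent alpha from a log-log fit."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # Remove a linear trend from each segment, collect RMS residuals
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.sqrt(np.mean(np.square(rms))))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return np.array(F), alpha

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=4096)) * 0.02 + rng.normal(size=4096)  # toy series
scales = np.array([16, 32, 64, 128, 256])
F, alpha = dfa(x, scales)
print(f"alpha = {alpha:.2f}  (H = alpha for stationary fGn-like noise)")
```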
The early maximum likelihood estimation model of audiovisual integration in speech perception.
Andersen, Tobias S
2015-05-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
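The core of MLE-based integration on a continuous internal representation is inverse-variance weighting of the unimodal estimates, sketched below with hypothetical numbers; this is the generic MLE fusion rule, not the paper's full model with its categorization stage:

```python
import numpy as np

def mle_integration(est_a, var_a, est_v, var_v):
    """Optimal (maximum likelihood) fusion of auditory and visual estimates
    on a continuous internal dimension: inverse-variance weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused = w_a * est_a + (1 - w_a) * est_v
    fused_var = 1 / (1 / var_a + 1 / var_v)  # never larger than either cue alone
    return fused, fused_var

# Hypothetical internal estimates of a phonetic feature (arbitrary units)
print(mle_integration(est_a=0.2, var_a=0.5, est_v=0.8, var_v=0.2))
```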
Khan, Shahbaz; Basra, Shahzad Maqsood Ahmed; Afzal, Irfan; Nawaz, Muhammad; Rehman, Hafeez Ur
2017-12-01
Wheat is a staple food of the region, contributing 60% of daily caloric intake, but delayed sowing reduces its yield due to a shortened life span. Moringa leaf extract (MLE) is considered to improve the growth and development of field crops. The study comprised two experiments. In the first experiment, freshly extracted MLE, alone and in combination with growth-promoting substances, was stored at two temperature regimes. Chemical analysis after 1, 2, and 3 months of storage showed that phenolic and ascorbic acid concentrations decreased with increasing storage period. Fresh extracts improved the speed and spread of emergence and seedling vigor. The effectiveness of MLE in terms of phenolic and ascorbate concentrations was highest up to 1 month and decreased with prolonged storage. The growth-enhancing potential of MLE also declined with increasing storage duration. Under field conditions, the bio-efficacy of these fresh and stored MLEs was compared when applied as a foliar spray at the tillering and booting stages of wheat. Foliar-applied fresh MLE was the most effective in improving growth parameters. Fresh MLE enhanced biochemical and yield attributes in late-sown wheat. This growth-promoting potential of MLE decreased with storage time. Application of fresh MLE helped to achieve higher economic yield.
1981-03-03
…resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is… although no direct comparison was given. The objective of this report is to compare the capabilities of MLE and MEM in resolving two optical CSOs.
Radiance and atmosphere propagation-based method for the target range estimation
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan
2012-06-01
Target range estimation has traditionally relied on radar and active sonar in modern combat systems. However, the performance of such active sensors is degraded severely by jamming signals from the enemy. This paper proposes a simple method for estimating the range between a target and a sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects at different wavelengths, and this approach is robust against electromagnetic jamming. The measured target radiance at each wavelength depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the MODTRAN results and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao Lower Bound (CRLB) via the probability density function of the measured radiance, and we compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
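A toy Monte Carlo comparison of the ML range estimate against the CRLB, assuming a simple monotone radiance-range law with additive Gaussian noise in place of a full MODTRAN model; all constants are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

L0, a, sigma = 1.0, 0.3, 5e-4      # source radiance, extinction, sensor noise
g = lambda r: L0 * np.exp(-a * r) / r**2    # attenuated radiance at range r

def mle_range(y):
    """Single-band MLE: with Gaussian noise the MLE inverts the monotone
    radiance-vs-range curve at the measured value."""
    return brentq(lambda r: g(r) - y, 0.5, 50.0)

true_r = 5.0
rng = np.random.default_rng(3)
est = [mle_range(g(true_r) + e) for e in rng.normal(0, sigma, 5000)
       if g(50.0) < g(true_r) + e < g(0.5)]

# CRLB for range from one Gaussian measurement: sigma^2 / g'(r)^2,
# with g'(r) = -g(r) * (a + 2/r)
crlb = sigma**2 / (g(true_r) * (a + 2 / true_r))**2
print(f"MC variance: {np.var(est):.5f}   CRLB: {crlb:.5f}")
```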
Liu, Yan; Li, Xuemei; Xie, Chen; Luo, Xiuzhen; Bao, Yonggang; Wu, Bin; Hu, Yuchi; Zhong, Zhong; Liu, Chang; Li, MinJie
2016-01-01
For centuries, mulberry leaf has been used in traditional Chinese medicine for the treatment of diabetes. This study aims to test the prevention effects of a proprietary mulberry leaf extract (MLE) and a formula consisting of MLE, fenugreek seed extract, and cinnamon cassia extract (MLEF) on insulin resistance development in animals. MLE was refined to contain 5% 1-deoxynojirimycin by weight. MLEF was formulated by mixing MLE with cinnamon cassia extract and fenugreek seed extract at a 6:5:3 ratio (by weight). First, the acute toxicity effects of MLE on ICR mice were examined at a 5 g/kg BW dose. Second, two groups of normal rats were administered water or 150 mg/kg BW MLE per day for 29 days to evaluate MLE's effect on normal animals. Third, to examine the effects of MLE and MLEF on model animals, sixty SD rats were divided into five groups, namely, (1) normal, (2) model, (3) high-dose MLE (75 mg/kg BW) treatment, (4) low-dose MLE (15 mg/kg BW) treatment, and (5) MLEF (35 mg/kg BW) treatment. On the second week, rats in groups (2)-(5) were switched to a high-energy diet for three weeks. Afterward, the rats were injected (ip) with a single dose of 105 mg/kg BW alloxan. After four more days, fasting blood glucose, post-prandial blood glucose, serum insulin, cholesterol, and triglyceride levels were measured. Finally, liver lysates from the animals were screened with 650 antibodies for changes in the expression or phosphorylation levels of signaling proteins. The results were further validated by Western blot analysis. We found that the maximum tolerance dose of MLE was greater than 5 g/kg in mice. MLE at a 150 mg/kg BW dose showed no effect on fasting blood glucose levels in normal rats. MLE at a 75 mg/kg BW dose and MLEF at a 35 mg/kg BW dose significantly (p < 0.05) reduced fasting blood glucose levels in rats with impaired glucose and lipid metabolism. In total, 34 proteins with significant changes in expression and phosphorylation levels were identified. The changes in JNK, IRS1, and PDK1 were confirmed by Western blot analysis. In conclusion, this study demonstrated the potential protective effects of MLE and MLEF against hyperglycemia induced by a high-energy diet and toxic chemicals in rats for the first time. The most likely mechanism is the promotion of IRS1 phosphorylation, which leads to insulin sensitivity restoration.
Gelabert-Rebato, Miriam; Wiebe, Julia C; Martin-Rincon, Marcos; Gericke, Nigel; Perez-Valera, Mario; Curtelin, David; Galvan-Alvarez, Victor; Lopez-Rios, Laura; Morales-Alamo, David; Calbet, Jose A L
2018-01-01
It remains unknown whether polyphenols such as luteolin (Lut), mangiferin and quercetin (Q) have ergogenic effects during repeated all-out prolonged sprints. Here we tested the effect of Mangifera indica L. leaf extract (MLE) rich in mangiferin (Zynamite®) administered with either quercetin (Q) and tiger nut extract (TNE), or with luteolin (Lut), on sprint performance and recovery from ischemia-reperfusion. Thirty young volunteers were randomly assigned to three treatments 48 h before exercise. Treatment A: placebo (500 mg of maltodextrin/day); B: 140 mg of MLE (60% mangiferin) and 50 mg of Lut/day; and C: 140 mg of MLE, 600 mg of Q and 350 mg of TNE/day. After warm-up, subjects performed two 30 s Wingate tests and a 60 s all-out sprint separated by 4 min recovery periods. At the end of the 60 s sprint the circulation of both legs was instantaneously occluded for 20 s. Then the circulation was re-opened and a 15 s sprint performed, followed by 10 s recovery with open circulation and another 15 s final sprint. MLE supplements enhanced peak (Wpeak) and mean (Wmean) power output by 5.0-7.0% (P < 0.01). After ischemia, MLE+Q+TNE increased Wpeak by 19.4 and 10.2% compared with the placebo (P < 0.001) and MLE+Lut (P < 0.05), respectively. MLE+Q+TNE increased Wmean post-ischemia by 11.2 and 6.7% compared with the placebo (P < 0.001) and MLE+Lut (P = 0.012). Mean VO2 during the sprints was unchanged, suggesting increased efficiency or recruitment of the anaerobic capacity after MLE ingestion. In women, peak VO2 during the repeated sprints was 5.8% greater after the administration of MLE, coinciding with better brain oxygenation. MLE attenuated the metaboreflex hyperpneic response post-ischemia, may have improved O2 extraction by the vastus lateralis (MLE+Q+TNE vs. placebo, P = 0.056), and reduced pain during ischemia (P = 0.068). Blood lactate, acid-base balance, and plasma electrolyte responses were not altered by the supplements. In conclusion, a MLE extract rich in mangiferin combined with either quercetin and tiger nut extract or luteolin exerts a remarkable ergogenic effect, increasing muscle power in fatigued subjects and enhancing peak VO2 and brain oxygenation in women during prolonged sprinting. Importantly, the combination of MLE+Q+TNE improves skeletal muscle contractile function during ischemia/reperfusion.
Tools of Robustness for Item Response Theory.
ERIC Educational Resources Information Center
Jones, Douglas H.
This paper briefly demonstrates a few of the possibilities of a systematic application of robustness theory, concentrating on the estimation of ability when the true item response model does and does not fit the data. The definition of the maximum likelihood estimator (MLE) of ability is briefly reviewed. After introducing the notion of…
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
The MLE Teacher: An Agent of Change or a Cog in the Wheel?
ERIC Educational Resources Information Center
Bedamatta, Urmishree
2014-01-01
This article examines the role of the multilingual education (MLE) teacher in the mother tongue-based MLE program for the Juangas, a tribe in Odisha, an eastern state of India, and is part of a broader study of the MLE program in the state. For the specific purpose of this article, I have adopted Welmond's (2002) three-step process: identifying…
Chen, Xiaoxin; Qin, Rong; Liu, Ba; Ma, Yan; Su, Yinghao; Yang, Chung S; Glickman, Jonathan N; Odze, Robert D; Shaheen, Nicholas J
2008-01-11
In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor of intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembles human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753-765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1alpha, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE compared to humans. These data indicate that rat MLE shares similar properties with human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE.
Estimating relative risks for common outcome using PROC NLP.
Yu, Binbing; Wang, Zhuoqiao
2008-05-01
In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
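A Python analogue of the constrained-optimization idea (here via SciPy's SLSQP rather than SAS PROC NLP), fitting the log-binomial model with the constraint that fitted probabilities stay at or below one; the simulated data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def log_binomial_mle(X, y):
    """Log-binomial MLE via constrained nonlinear programming, in the spirit
    of the PROC NLP approach: maximize the Bernoulli likelihood with
    log p = X @ beta, subject to X @ beta <= 0 so that p <= 1."""
    def nll(beta):
        eta = X @ beta
        p = np.exp(np.clip(eta, -30, -1e-9))   # keep p strictly in (0, 1)
        return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    cons = {"type": "ineq", "fun": lambda b: -X @ b}   # elementwise eta <= 0
    beta0 = np.zeros(X.shape[1])
    beta0[0] = np.log(max(y.mean(), 1e-3))
    return minimize(nll, beta0, constraints=[cons], method="SLSQP").x

rng = np.random.default_rng(5)
x = rng.uniform(size=300)
X = np.column_stack([np.ones(300), x])
p_true = np.exp(-1.2 + 0.8 * x)              # common outcome, RR model
y = rng.binomial(1, np.minimum(p_true, 1.0))
beta = log_binomial_mle(X, y)
print("beta:", beta, " relative risk per unit x:", np.exp(beta[1]))
```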
Hoseinifar, Seyed Hossein; Khodadadian Zou, Hassan; Kolangi Miandare, Hamed; Van Doan, Hien; Romano, Nicholas; Dadar, Maryam
2017-08-01
A feeding trial was performed to assess the effects of dietary medlar (Mespilus germanica) leaf extract (MLE) on the growth performance, skin mucus non-specific immune parameters, and mRNA levels of immune- and antioxidant-related genes in the skin of common carp (Cyprinus carpio) fingerlings. Fish were fed diets supplemented with graded levels (0, 0.25, 0.50, and 1.00%) of MLE for 49 days. The results revealed an improvement in the growth performance and feed conversion ratio in MLE-fed carp (P < 0.05), regardless of the inclusion level. The immunoglobulin levels in the skin mucus and interleukin 8 levels in the skin showed a significant increase in fish fed 1% MLE (P < 0.05) in comparison with the other MLE treatments and the control group. Also, feeding on 0.25% and 0.50% MLE remarkably increased skin mucus lysozyme activity (P < 0.05). However, there was no significant difference between the MLE-treated groups and the control in the case of protease activity in the skin mucus or tumor necrosis factor alpha and interleukin 1 beta gene expression in the skin (P > 0.05). The expression of genes encoding glutathione reductase and glutathione S-transferase alpha was remarkably increased in MLE-fed carp compared to the control group (P < 0.05), while carp fed 0.50% or 1.00% MLE had significantly increased glutathione peroxidase expression in their skin (P < 0.05). The present results revealed the potentially beneficial effects of MLE on the mucosal immune system and growth performance in common carp fingerlings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
Some New Estimation Methods for Weighted Regression When There are Possible Outliers.
1985-01-01
…about influential points, and to add to our understanding of the structure of the data. In Section 2 we show, by considering the influence function, why… The influence function (Hampel, 1968, 1974) for the maximum likelihood estimator is proportional to (ε² − 1)h(x), where ε = (y − x′β)exp[−h′(x)θ], and is thus… unbounded. Since the influence function for the MLE is quadratic in the residual ε, in theory a point with a sufficiently large residual can have an…
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, …, Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random, with process variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling component. Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, …, Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed effects MLE for the Si.
Wlan-Based Indoor Localization Using Neural Networks
NASA Astrophysics Data System (ADS)
Saleem, Fasiha; Wyne, Shurjeel
2016-07-01
Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method based on maximum likelihood estimation (MLE) of the user location. We incorporate the spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.
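A minimal sketch of the probabilistic fingerprinting baseline: choose the radio-map location maximizing a Gaussian likelihood of the online RSS vector, with missing APs simply dropped from the likelihood. The radio map and readings are hypothetical:

```python
import numpy as np

def fingerprint_mle(online_rss, radio_map):
    """Probabilistic fingerprinting: pick the calibration location whose
    Gaussian RSS model maximizes the likelihood of the online measurement.

    radio_map: {location: (mean_rss_per_AP, std_rss_per_AP)}
    Missing APs (NaN in online_rss) are excluded from the likelihood.
    """
    best_loc, best_ll = None, -np.inf
    seen = ~np.isnan(online_rss)
    for loc, (mu, sd) in radio_map.items():
        z = (online_rss[seen] - mu[seen]) / sd[seen]
        ll = -0.5 * np.sum(z**2 + np.log(2 * np.pi * sd[seen] ** 2))
        if ll > best_ll:
            best_loc, best_ll = loc, ll
    return best_loc

radio_map = {                                 # hypothetical 3-AP radio map
    "roomA": (np.array([-40., -70., -60.]), np.array([4., 6., 5.])),
    "roomB": (np.array([-65., -45., -55.]), np.array([5., 4., 6.])),
}
print(fingerprint_mle(np.array([-42., np.nan, -58.]), radio_map))
```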
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals with the empirical data. The emission inventory 95% uncertainty ranges span from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
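A minimal sketch of the MLE-plus-parametric-bootstrap step for the uncertainty in a mean emission factor, assuming lognormal interunit variability and hypothetical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
ef = rng.lognormal(mean=-2.0, sigma=1.0, size=40)  # hypothetical emission factors

# Fit interunit variability (lognormal) by MLE, then bootstrap the mean
s, _, scale = stats.lognorm.fit(ef, floc=0)
mu = np.log(scale)
boot_means = [rng.lognormal(mu, s, ef.size).mean() for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
m = ef.mean()
print(f"mean EF uncertainty: {100*(lo/m - 1):+.0f}% to {100*(hi/m - 1):+.0f}%")
```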
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
Nakamura, Kenta; Tsonis, Panagiotis A.
2014-01-01
Adult newts (Notophthalmus viridescens) are capable of complete lens regeneration that is mediated through dorsal iris pigment epithelial (IPE) cell transdifferentiation. In contrast, higher vertebrates such as mice demonstrate only limited lens regeneration in the presence of an intact lens capsule with remaining lens epithelial cells. To compare the intrinsic lens regeneration potential of newt IPE versus mouse lens epithelial cells (MLE), we have established a novel culture method that uses cell aggregation before culture in growth factor-reduced Matrigel™. Dorsal newt IPE aggregates demonstrated complete lens formation within 1 to 2 weeks of Matrigel culture without basic fibroblast growth factor (bFGF) supplementation, including the establishment of a peripheral cuboidal epithelial cell layer and the appearance of central lens fibers that were positive for αA-crystallin. In contrast, the lens-forming potential of MLE cell aggregates cultured in Matrigel was incomplete and resulted in the formation of defined-size lentoids with partial optical transparency. While the peripheral cell layers of MLE aggregates were nucleated, cells in the center of aggregates demonstrated a nonapoptotic nuclear loss over a time period of 3 weeks that was representative of lens fiber formation. Matrigel culture supplementation with bFGF resulted in larger, more transparent MLE aggregates that demonstrated increased βB1-crystallin expression. Our study demonstrates that bFGF is not required for induction of newt IPE aggregate-dependent lens formation in Matrigel, while the addition of bFGF seems to be beneficial for the formation of MLE aggregate-derived lens-like structures. In conclusion, the three-dimensional aggregate culture of IPE and MLE in Matrigel allows, to a greater extent than older models, the in-depth study of the intrinsic lens-forming potential and the corresponding identification of lentogenic factors. PMID:23672748
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
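The BMA variance decomposition described above (total variance = within-parameterization + between-parameterization) can be written in a few lines. A minimal Python sketch, with made-up posterior means, variances and model weights standing in for the NLSE-derived quantities:

```python
# Sketch of the BMA moments: total variance = within- + between-parameterization.
# Posterior means, variances and model weights are made-up stand-ins for the
# NLSE-derived quantities of the individual GP methods.
import numpy as np

means = np.array([12.0, 15.0, 13.5])     # conditional mean (e.g. log-K) per GP
variances = np.array([4.0, 6.0, 5.0])    # within-parameterization variances
weights = np.array([0.5, 0.3, 0.2])      # posterior model probabilities

bma_mean = np.sum(weights * means)
within = np.sum(weights * variances)
between = np.sum(weights * (means - bma_mean) ** 2)
print(bma_mean, within + between)        # BMA conditional mean and variance
```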
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
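As an illustration of the estimators the tool implements, the following Python sketch fits an Inverse Gamma model by MLE and computes histogram and kernel density estimates on synthetic landslide areas (the Double Pareto model is not available in scipy, and all parameter values are illustrative assumptions):

```python
# Sketch: parametric (Inverse Gamma MLE) and non-parametric (HDE, KDE) density
# estimates for synthetic landslide areas. The Double Pareto model is not in
# scipy; all parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
areas = stats.invgamma.rvs(a=1.4, scale=1.0e3, size=500, random_state=rng)

a_hat, _, scale_hat = stats.invgamma.fit(areas, floc=0.0)   # MLE, location fixed

log_a = np.log10(areas)
hist, edges = np.histogram(log_a, bins="auto", density=True)  # HDE
kde = stats.gaussian_kde(log_a)                               # KDE

grid = np.linspace(log_a.min(), log_a.max(), 200)
print(a_hat, scale_hat, kde(grid).max())
```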
Zhang, Wangshu; Coba, Marcelo P; Sun, Fengzhu
2016-01-11
Protein domains can be viewed as portable units of biological function that define the functional properties of proteins. Therefore, if a protein is associated with a disease, protein domains might also be associated and define disease endophenotypes. However, knowledge about such domain-disease relationships is rarely available. Thus, identification of domains associated with human diseases would greatly improve our understanding of the mechanisms of human complex diseases and further improve the prevention, diagnosis and treatment of these diseases. Based on phenotypic similarities among diseases, we first group diseases into overlapping modules. We then develop a framework to infer associations between domains and diseases through known relationships between diseases and modules, domains and proteins, as well as proteins and disease modules. Different methods, including Association, Maximum likelihood estimation (MLE), Domain-disease pair exclusion analysis (DPEA), Bayesian, and Parsimonious explanation (PE) approaches, are developed to predict domain-disease associations. We demonstrate the effectiveness of all five approaches via a series of validation experiments, and show the robustness of the MLE, Bayesian and PE approaches to the involved parameters. We also study the effects of disease modularization in inferring novel domain-disease associations. Through validation, the AUC (Area Under the operating characteristic Curve) scores for the Bayesian, MLE, DPEA, PE, and Association approaches are 0.86, 0.84, 0.83, 0.83 and 0.79, respectively, indicating the usefulness of these approaches for predicting domain-disease relationships. Finally, we choose the Bayesian approach to infer domains associated with two common diseases, Crohn's disease and type 2 diabetes. The Bayesian approach has the best performance for the inference of domain-disease relationships. The predicted landscape between domains and diseases provides a more detailed view of disease mechanisms.
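Of the five approaches, the Association approach is the simplest to sketch. The toy Python example below (all protein, domain and disease identifiers are hypothetical) scores a domain-disease pair by the fraction of the disease's proteins that carry the domain:

```python
# Toy sketch of the Association approach: score a domain-disease pair by the
# fraction of the disease's proteins carrying the domain. All identifiers are
# hypothetical.
protein_domains = {"P1": {"D1", "D2"}, "P2": {"D2"}, "P3": {"D3"}}
disease_proteins = {"disease_A": {"P1", "P2"}, "disease_B": {"P3"}}

all_domains = set().union(*protein_domains.values())
scores = {}
for disease, proteins in disease_proteins.items():
    for dom in all_domains:
        carriers = sum(dom in protein_domains.get(p, set()) for p in proteins)
        scores[(disease, dom)] = carriers / len(proteins)
print(scores)
```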
Correcting for bias in the selection and validation of informative diagnostic tests.
Robertson, David S; Prevost, A Toby; Bowden, Jack
2015-04-15
When developing a new diagnostic test for a disease, there are often multiple candidate classifiers to choose from, and it is unclear if any will offer an improvement in performance compared with current technology. A two-stage design can be used to select a promising classifier (if one exists) in stage one for definitive validation in stage two. However, estimating the true properties of the chosen classifier is complicated by the first stage selection rules. In particular, the usual maximum likelihood estimator (MLE) that combines data from both stages will be biased high. Consequently, confidence intervals and p-values flowing from the MLE will also be incorrect. Building on the results of Pepe et al. (SIM 28:762-779), we derive the most efficient conditionally unbiased estimator and exact confidence intervals for a classifier's sensitivity in a two-stage design with arbitrary selection rules; the condition being that the trial proceeds to the validation stage. We apply our estimation strategy to data from a recent family history screening tool validation study by Walter et al. (BJGP 63:393-400) and are able to identify and successfully adjust for bias in the tool's estimated sensitivity to detect those at higher risk of breast cancer. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
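The selection-induced bias described above is easy to demonstrate by simulation. A minimal Python sketch, assuming three candidate classifiers with identical true sensitivity and a pick-the-winner stage-one rule (sample sizes are illustrative):

```python
# Monte Carlo sketch of the stage-one selection bias: three classifiers with
# identical true sensitivity, pick-the-winner selection, naive pooled MLE.
# Sample sizes and the sensitivity value are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_sens = np.array([0.70, 0.70, 0.70])
n1, n2 = 50, 200
naive = []
for _ in range(5000):
    stage1 = rng.binomial(n1, true_sens) / n1
    best = int(np.argmax(stage1))                 # selection rule
    stage2 = rng.binomial(n2, true_sens[best]) / n2
    naive.append((stage1[best] * n1 + stage2 * n2) / (n1 + n2))
print(np.mean(naive) - 0.70)   # positive: the pooled estimate is biased high
```

The mean of the pooled naive estimate exceeds the true sensitivity, which is exactly the bias the conditionally unbiased estimator corrects.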
Frazier, Courtney L.; San Filippo, Joseph; Lambowitz, Alan M.; Mills, David A.
2003-01-01
Despite their commercial importance, there are relatively few facile methods for genomic manipulation of the lactic acid bacteria. Here, the lactococcal group II intron, Ll.ltrB, was targeted to insert efficiently into genes encoding malate decarboxylase (mleS) and tetracycline resistance (tetM) within the Lactococcus lactis genome. Integrants were readily identified and maintained in the absence of a selectable marker. Since splicing of the Ll.ltrB intron depends on the intron-encoded protein, targeted invasion with an intron lacking the intron open reading frame disrupted TetM and MleS function, and MleS activity could be partially restored by expressing the intron-encoded protein in trans. Restoration of splicing from intron variants lacking the intron-encoded protein illustrates how targeted group II introns could be used for conditional expression of any gene. Furthermore, the modified Ll.ltrB intron was used to separately deliver a phage resistance gene (abiD) and a tetracycline resistance marker (tetM) into mleS, without the need for selection to drive the integration or to maintain the integrant. Our findings demonstrate the utility of targeted group II introns as a potential food-grade mechanism for delivery of industrially important traits into the genomes of lactococci. PMID:12571038
ERIC Educational Resources Information Center
Russell, Christina; Amod, Zaytoon; Rosenthal, Lesley
2008-01-01
This study addressed the effect of parent-child Mediated Learning Experience (MLE) interaction on cognitive development in early childhood. It measured the MLE interactions of 14 parents with their preschool children in the contexts of free-play and structured tasks. The children were assessed for their manifest cognitive performance and learning…
Kajander, Tommi; Lehtiö, Lari; Schlömann, Michael; Goldman, Adrian
2003-01-01
Bacterial muconate lactonizing enzymes (MLEs) catalyze the conversion of cis,cis-muconate as a part of the β-ketoadipate pathway, and some MLEs are also able to dehalogenate chlorinated muconates (Cl-MLEs). The basis for the Cl-MLEs' dehalogenating activity is still unclear. To further elucidate the differences between MLEs and Cl-MLEs, we have solved the structure of Pseudomonas P51 Cl-MLE at 1.95 Å resolution. Comparison of Pseudomonas MLE and Cl-MLE structures reveals the presence of a large cavity in the Cl-MLEs. The cavity may be related to conformational changes on substrate binding in Cl-MLEs, at Gly52. Site-directed mutagenesis changing Pseudomonas MLE core positions to the equivalent Cl-MLE residues showed that the variant Thr52Gly was rather inactive, whereas the Thr52Gly-Phe103Ser variant had regained part of the activity. These residues form a hydrogen bond in the Cl-MLEs. The Cl-MLE structure, as a result of the Thr-to-Gly change, is more flexible than MLE: as a mobile loop closes over the active site, a conformational change at Gly52 is observed in Cl-MLEs. The loose packing and structural motions in Cl-MLE may be required for the rotation of the lactone ring in the active site necessary for the dehalogenating activity of Cl-MLEs. Furthermore, we also suggest that differences in the active-site mobile loop sequence between MLEs and Cl-MLEs result in lower active-site polarity in Cl-MLEs, possibly affecting catalysis. These changes could result in slower product release from Cl-MLEs and make them better enzymes for dehalogenation of the substrate. PMID:12930985
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
Tzuriel, David; Shomron, Vered
2018-06-01
The theoretical framework of the current study is based on mediated learning experience (MLE) theory, which is similar to the scaffolding concept. The main question of the current study was to what extent mother-child MLE strategies affect psychological resilience and cognitive modifiability of boys with learning disability (LD). Secondary questions were to what extent the home environment, severity of boy's LD, and mother's attitude towards her child's LD affect her MLE strategies and consequently the child's psychological resilience and cognitive modifiability. The main objectives of this study were the following: (a) to investigate the effects of mother-child MLE strategies on psychological resilience and cognitive modifiability among 7- to 10-year-old boys with LD, (b) to study the causal effects of distal factors (i.e., socio-economic status [SES], home environment, severity of child's LD, mother's attitude towards LD) and proximal factors (i.e., MLE strategies) on psychological resilience and cognitive modifiability. A sample of mother-child dyads (n = 100) were videotaped during a short teaching interaction. All children were boys diagnosed as children with LD. The interaction was analysed for MLE strategies by the Observation of Mediation Interaction scale. Children were administered psychological resilience tests and their cognitive modifiability was measured by dynamic assessment using the Analogies subtest from the Cognitive Modifiability Battery. Home environment was rated by the Home Observation for Measurement of the Environment (HOME), and mothers answered a questionnaire of attitudes towards child's LD. The findings showed that mother-child MLE strategies, HOME, and socio-economic level contributed significantly to prediction of psychological resilience (78%) and cognitive modifiability (51%). Psychological resilience was positively correlated with cognitive modifiability (Rc = 0.67). Structural equation modelling analysis supported, in general, the hypotheses about the causal effects of distal and proximal factors of psychological resilience and cognitive modifiability. The findings validate and extend the MLE theory by showing that mother-child MLE strategies significantly predict psychological resilience and cognitive modifiability among boys with LD. Significant correlation between psychological resilience and cognitive modifiability calls for further research exploring the role of MLE strategies in development of both. © 2018 The British Psychological Society.
Transmission potential of Zika virus infection in the South Pacific.
Nishiura, Hiroshi; Kinoshita, Ryo; Mizumoto, Kenji; Yasuda, Yohei; Nah, Kyeongah
2016-04-01
Zika virus has spread internationally through countries in the South Pacific and Americas. The present study aimed to estimate the basic reproduction number, R0, of Zika virus infection as a measurement of the transmission potential, reanalyzing past epidemic data from the South Pacific. Incidence data from two epidemics, one on Yap Island, Federated States of Micronesia in 2007 and the other in French Polynesia in 2013-2014, were reanalyzed. R0 of Zika virus infection was estimated from the early exponential growth rate of these two epidemics. The maximum likelihood estimate (MLE) of R0 for the Yap Island epidemic was in the order of 4.3-5.8 with broad uncertainty bounds due to the small sample size of confirmed and probable cases. The MLE of R0 for French Polynesia based on syndromic data ranged from 1.8 to 2.0 with narrow uncertainty bounds. The transmissibility of Zika virus infection appears to be comparable to those of dengue and chikungunya viruses. Considering that Aedes species are a shared vector, this finding indicates that Zika virus replication within the vector is perhaps comparable to dengue and chikungunya. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
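The growth-rate-to-R0 step above can be sketched compactly. Assuming a gamma-distributed generation interval and the Wallinga-Lipsitch relation R0 = 1/M(-r), the following Python example fits the exponential growth rate by Poisson MLE on hypothetical weekly case counts (the counts and generation-interval parameters are illustrative assumptions, not the Yap or French Polynesia data):

```python
# Sketch: Poisson MLE of the early exponential growth rate r, then
# R0 = 1/M(-r) for a gamma generation interval (Wallinga-Lipsitch). The weekly
# counts and generation-interval mean/SD are illustrative assumptions.
import numpy as np
from scipy import optimize

cases = np.array([2, 3, 5, 9, 14, 25, 40, 68])
t = np.arange(len(cases))

def negloglik(params):
    a, r = params
    lam = np.exp(a + r * t)
    return np.sum(lam - cases * np.log(lam))

a_hat, r_hat = optimize.minimize(negloglik, x0=[0.5, 0.3],
                                 method="Nelder-Mead").x

mean_gi, sd_gi = 2.0, 1.0                    # generation interval, weeks
shape = (mean_gi / sd_gi) ** 2
scale = sd_gi ** 2 / mean_gi
R0 = (1.0 + r_hat * scale) ** shape          # 1/M(-r) for a gamma distribution
print(r_hat, R0)
```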
ERIC Educational Resources Information Center
Zhang, Hui; Zhu, Chang; Sang, Guoyuan
2014-01-01
Media literacy is an essential skill for living in the twenty-first century. School-based instruction is a critical part of media literacy education (MLE), while research on teachers' concerns and integration of MLE is not sufficient. The objective of this study is to investigate teachers' stages of concern (SoC), perceived need, school context,…
ERIC Educational Resources Information Center
Tanriverdi, Belgin; Apak, Ozlem
2010-01-01
The purpose of this study is to evaluate the implications of Media Literacy Education (MLE) in Turkey by analyzing the Primary School Curricula in terms of MLE comparatively in Turkey, Ireland and Finland. In this study, the selection of Finland and Ireland curricula is related with those countries' being the pioneering countries in MLE and the…
NASA Astrophysics Data System (ADS)
Idris, N. H.; Deng, X.; Idris, N. H.
2017-05-01
This paper presents the validation of the Coastal Altimetry Waveform Retracking Expert System (CAWRES), a novel method to optimize Jason satellite altimetric sea levels from multiple retracking solutions. The validation is conducted over the region of Prince William Sound in Alaska, USA, where altimetric waveforms are perturbed by emerged land and sea states. The validation is twofold: first, comparison with existing retrackers (i.e. MLE4 and Ice) from the Sensor Geophysical Data Records (SGDR), and second, comparison with in-situ tide gauge data. From the first assessment, in general, CAWRES outperforms the MLE4 and Ice retrackers. In 4 out of 6 cases, the improvement percentage is higher, and the standard deviation of differences lower, than those of the SGDR retrackers. CAWRES also presents the best performance in producing valid observations and has the lowest noise when compared to the SGDR retrackers. From the second assessment with tide gauges, CAWRES retracked sea level anomalies (SLAs) are consistent with those of the tide gauge. The accuracy of CAWRES retracked SLAs is slightly better than that of MLE4. However, the performance of the Ice retracker is better than those of CAWRES and MLE4, suggesting that the empirical retracker is more effective. The results demonstrate that CAWRES has the potential to be applied to coastal regions elsewhere.
Varadhan, Ravi; Wang, Sue-Jane
2016-01-01
Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
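The shrinkage idea behind the Bayes-DS and Emp-Bayes estimators can be illustrated with a small empirical-Bayes calculation. In the Python sketch below, subgroup effect estimates are pulled toward the precision-weighted overall effect; the effects, standard errors and the method-of-moments estimate of the between-subgroup variance are illustrative assumptions:

```python
# Empirical-Bayes shrinkage sketch: pull subgroup effects toward the
# precision-weighted overall effect. Effects, SEs and the method-of-moments
# tau^2 are illustrative.
import numpy as np

effects = np.array([0.35, 0.10, 0.22, -0.05, 0.18])
se2 = np.array([0.02, 0.03, 0.02, 0.04, 0.03])        # squared standard errors

overall = np.average(effects, weights=1.0 / se2)
tau2 = max(0.0, np.var(effects, ddof=1) - se2.mean())  # between-subgroup variance

shrunk = overall + tau2 / (tau2 + se2) * (effects - overall)
print(shrunk)
```

Extreme subgroup estimates move furthest toward the overall effect, which is the tempering of optimism the abstract describes.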
Ilik, Ibrahim Avsar; Maticzka, Daniel; Georgiev, Plamen; Gutierrez, Noel Marie; Backofen, Rolf; Akhtar, Asifa
2017-01-01
The X chromosome provides an ideal model system to study the contribution of RNA–protein interactions in epigenetic regulation. In male flies, roX long noncoding RNAs (lncRNAs) harbor several redundant domains to interact with the ubiquitin ligase male-specific lethal 2 (MSL2) and the RNA helicase Maleless (MLE) for X-chromosomal regulation. However, how these interactions provide the mechanics of spreading remains unknown. By using the uvCLAP (UV cross-linking and affinity purification) methodology, which provides unprecedented information about RNA secondary structures in vivo, we identified the minimal functional unit of roX2 RNA. By using wild-type and various MLE mutant derivatives, including a catalytically inactive MLE derivative, MLEGET, we show that the minimal roX RNA contains two mutually exclusive stem–loops that exist in a peculiar structural arrangement: When one stem–loop is unwound by MLE, an alternate structure can form, likely trapping MLE in this perpetually structured region. We show that this functional unit is necessary for dosage compensation, as mutations that disrupt this formation lead to male lethality. Thus, we propose that roX2 lncRNA contains an MLE-dependent affinity switch to enable reversible interactions of the MSL complex to allow dosage compensation of the X chromosome. PMID:29066499
Waqas, Muhammad Ahmed; Khan, Imran; Akhter, Muhammad Javaid; Noor, Mehmood Ali; Ashraf, Umair
2017-04-01
Chilling stress hampers the optimal performance of maize under field conditions precipitously by inducing oxidative stress. To counter the damaging effects of chilling stress, the present study aimed to investigate the effects of some natural and synthetic plant growth regulators, i.e., salicylic acid (SA), thiourea (TU), sorghum water extract (SWE), and moringa leaf extract (MLE), on chilling stress tolerance in an autumn maize hybrid. Foliar application of growth regulators at low concentrations was carried out at the six-leaf (V6) and tasseling stages. An increase in crop growth rate (CGR), leaf area index (LAI), leaf area duration (LAD), plant height (PH), grain yield (GY), and total dry matter accumulation (TDM) was observed in exogenously treated plants as compared to the control. In addition, improved physio-biochemical, phenological, and grain nutritional quality attributes were noticed in foliar-treated maize plots as compared to non-treated ones. SA-treated plants showed 20% less electrolyte leakage from cell membranes than the control. MLE and SA proved best in improving total phenolic, relative water (19-23%), and chlorophyll contents among the applications. A similar trend was found for photosynthetic and transpiration rates, whereas MLE and SWE were found better in improving CGR, LAI, LAD, TDM, PH, GY, grains per cob, 1000-grain weight, and biological yield among all treatments including the control. TU and MLE significantly reduced the duration of the phenological events of the crop at the reproductive stage. MLE, TU, and SA also improved the grain protein, oil, and starch contents as compared to the control. Enhanced crop water productivity was also observed in MLE-treated plants. Economic analysis suggested that MLE and SA applications were more economical in inducing chilling stress tolerance under field conditions. Although all growth regulators elicited improvements in morpho-physiological attributes under suboptimal temperature stress conditions, MLE and SA acted as the leading agents, proving to be better stress alleviators by improving plant physio-biochemical attributes and maize growth.
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to take into account the non-stationarity of the data during dynamic contractions (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (changing Power Spectral Density (PSD)). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLB), for the estimated TVD model parameters and for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
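A minimal version of the parametric idea, assuming a first-order polynomial delay and Gaussian noise (so that MLE reduces to nonlinear least squares), can be sketched in Python; the sampling rate, delay coefficients and smoothing are illustrative assumptions rather than real sEMG settings:

```python
# Sketch: Gaussian MLE (= nonlinear least squares) of a first-order polynomial
# time-varying delay between a reference signal and its delayed copy. Sampling
# rate, delay coefficients and smoothing are illustrative, not real sEMG values.
import numpy as np
from scipy import interpolate, optimize

rng = np.random.default_rng(3)
fs = 1024.0
t = np.arange(0, 1.0, 1.0 / fs)
s = np.convolve(rng.normal(size=t.size), np.ones(8) / 8, mode="same")
spline = interpolate.CubicSpline(t, s)

c_true = (5e-3, 4e-3)                                # delay(t) = c0 + c1*t, s
x2 = spline(t - (c_true[0] + c_true[1] * t)) + 0.05 * rng.normal(size=t.size)

def residual(c):
    return x2 - spline(t - (c[0] + c[1] * t))

print(optimize.least_squares(residual, x0=[1e-3, 1e-3]).x)
```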
Audiovisual integration increases the intentional step synchronization of side-by-side walkers.
Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A
2017-12-01
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
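The MLE model referred to above combines cues by inverse-variance weighting. A small Python sketch of the prediction (the example means and variances are made-up numbers):

```python
# Sketch of the MLE (inverse-variance) cue-combination prediction; the example
# means and variances are made-up numbers.
def mle_integrate(mu_a, var_a, mu_v, var_v):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # never larger than either cue alone
    return mu, var

# Footstep sound heard 10 cm left, partner seen 4 cm left, audition noisier:
print(mle_integrate(mu_a=-10.0, var_a=9.0, mu_v=-4.0, var_v=4.0))
```

The predicted bimodal variance is smaller than either unimodal variance, which is the signature of optimal integration the experiments test.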
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable is zero, while the second part, a truncated negative binomial model, models the nonzero (positive integer) values. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Parameter estimation uses the Maximum Likelihood Estimator (MLE). Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data which contain zero values in some observations and various other values. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
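The uncensored two-part likelihood is simple enough to code directly. A Python sketch, assuming a logistic hurdle and a zero-truncated negative binomial for the positive counts (synthetic data, intercept-only model, no censoring), maximizes the log-likelihood numerically:

```python
# Sketch: intercept-only hurdle negative binomial likelihood (no censoring),
# with a logistic hurdle and a zero-truncated NB for positive counts, maximized
# numerically. The synthetic data are illustrative, not the tetanus counts.
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(4)
y = np.concatenate([np.zeros(60, dtype=int),
                    rng.negative_binomial(2.0, 0.4, size=40) + 1])

def negloglik(params):
    logit_pi, log_r, logit_p = params
    pi = special.expit(logit_pi)            # P(y > 0): the hurdle part
    r, p = np.exp(log_r), special.expit(logit_p)
    zero = y == 0
    ll = zero.sum() * np.log1p(-pi)
    yp = y[~zero]                           # zero-truncated NB for positives
    log_nb = (special.gammaln(yp + r) - special.gammaln(r)
              - special.gammaln(yp + 1) + r * np.log(p) + yp * np.log1p(-p))
    ll += np.sum(np.log(pi) + log_nb - np.log1p(-p ** r))
    return -ll

print(optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead").x)
```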
The ASEAN economic community and medical qualification
Kittrakulrat, Jathurong; Jongjatuporn, Witthawin; Jurjai, Ravipol; Jarupanich, Nicha; Pongpirul, Krit
2014-01-01
Background In the regional movement toward the ASEAN Economic Community (AEC), medical professionals, including physicians, can become qualified to practice medicine in another country. Ensuring comparable, excellent medical qualification systems is crucial, but the availability and analysis of relevant information have been lacking. Objective This study had the following aims: 1) to comparatively analyze information on Medical Licensing Examinations (MLE) across ASEAN countries and 2) to assess stakeholders' views on potential consequences of the AEC for the medical profession from a Thai perspective. Design To search for relevant information on MLE, we started with each country's national body as the primary data source. In case of lack of available data, secondary data sources including official websites of medical universities, colleagues in international and national medical student organizations, and some other appropriate Internet sources were used. Feasibility and concerns about validity and reliability of these sources were discussed among the investigators. Experts in the region invited through HealthSpace.Asia conducted the final data validation. For the second objective, in-depth interviews were conducted with 13 Thai stakeholders, purposely selected based on a maximum variation sampling technique to represent the points of view of the medical licensing authority, the medical profession, ethicists and economists. Results MLE systems exist in all ASEAN countries except Brunei, but vary greatly. Although the majority have a national MLE system, Singapore, Indonesia, and Vietnam accept results of MLEs conducted at universities. Thailand adopted the USA's 3-step approach that aims to check pre-clinical knowledge, clinical knowledge, and clinical skills. Most countries, however, require only one step. A multiple choice question (MCQ) is the most commonly used method of assessment; a modified essay question (MEQ) is the next most common. Although both tests assess a candidate's knowledge, the Objective Structured Clinical Examination (OSCE) is used to verify the clinical skills of the examinee. Whether a medical license is valid and reflects a consistently high standard of medical knowledge is a sensitive issue because of the potentially unfair movement of physicians and an embedded sense of domination, at least from a Thai perspective. Conclusions MLE systems differ across ASEAN countries in some important aspects that might be of concern from a fairness viewpoint and therefore should be addressed in the movement toward the AEC. PMID:25215908
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Liang, Zhihua; Das, Atreyee; Beerman, Daniel; Hu, Zhiqiang
2010-06-01
Biomass characteristics and microbial community diversity between a submerged membrane bioreactor with mixed liquor recirculation (MLE/MBR) and a membrane bioreactor with the addition of integrated fixed biofilm medium (IFMBR) were compared for organic carbon and nitrogen removal from wastewater. The two bench-scale MBRs were continuously operated in parallel at a hydraulic retention time (HRT) of 24 h and a solids retention time (SRT) of 20 d. Both MBRs demonstrated good COD removal efficiencies (>97.7%) at incremental inflow organic loading rates. The total nitrogen removal efficiencies were 67% for MLE/MBR and 41% for IFMBR. The recirculation of mixed liquor from the aerobic zone to the anoxic zone in the MLE/MBR resulted in higher microbial activities of heterotrophic (46.96 mg O2/g VSS·h) and autotrophic bacteria (30.37 mg O2/g VSS·h) in the MLE/MBR compared to those from IFMBR. Terminal Restriction Fragment Length Polymorphism analysis indicated that the higher nitrifying activities were correlated with greater diversity of nitrifying bacterial populations in the MLE/MBR. Membrane fouling due to bacterial growth was evident in both reactors. Even though the trans-membrane pressure and flux profiles of MLE/MBR and IFMBR were different, the patterns of total membrane resistance changes had no considerable difference under the same operating conditions. The results suggest that metabolic selection via alternating anoxic/aerobic processes has the potential of having higher bacterial activities and improved nutrient removal in MBR systems. Copyright 2010 Elsevier Ltd. All rights reserved.
Age of the Mono Lake excursion and associated tephra
Benson, L.; Liddicoat, J.; Smoot, J.; Sarna-Wojcicki, A.; Negrini, R.; Lund, S.
2003-01-01
The Mono Lake excursion (MLE) is an important time marker that has been found in lake and marine sediments across much of the Northern Hemisphere. Dating of this event at its type locality, the Mono Basin of California, has yielded controversial results with the most recent effort concluding that the MLE may actually be the Laschamp excursion (Earth Planet. Sci. Lett. 197 (2002) 151). We show that a volcanic tephra (Ash #15) that occurs near the midpoint of the MLE has a date (not corrected for reservoir effect) of 28,620 ± 300 14C yr BP (~32,400 GISP2 yr BP) in the Pyramid Lake Basin of Nevada. Given the location of Ash #15 and the duration of the MLE in the Mono Basin, the event occurred between 31,500 and 33,300 GISP2 yr BP, an age range consistent with the position and age of the uppermost of two paleointensity minima in the NAPIS-75 stack that has been associated with the MLE (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009). The lower paleointensity minimum in the NAPIS-75 stack is considered to be the Laschamp excursion (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009).
Aktas, M; Özübek, S
2017-07-01
This study investigated possible transovarial and transstadial transmission of Hepatozoon canis by Rhipicephalus sanguineus (Latreille) ticks collected from naturally infected dogs in a municipal dog shelter and from the grounds of the shelter. Four hundred sixty-five engorged nymphs were collected from 16 stray dogs that were found to be infected with H. canis by blood smear and PCR analyses, and were maintained in an incubator at 28 °C for moulting. Four hundred eighteen nymphs moulted to adults 14-16 d post collection. Unfed ticks from the shelter grounds comprised 1,500 larvae, 2,100 nymphs, and 85 adults; these were sorted according to origin, developmental stage, and sex into 117 pools and screened by 18S rRNA PCR for Hepatozoon infection. Of 60 adult tick pools examined, 51 were infected with H. canis. The overall maximum likelihood estimate (MLE) of the infection rate was calculated as 21.0% (CI 15.80-28.21). Hepatozoon canis was detected in 31 out of 33 female pools (MLE 26.96%, CI 17.64-44.33) and 20 out of 27 male pools (MLE 14.82%, CI 20.15-46.41). Among 42 unfed nymph pools collected from the shelter, 26 were infected with H. canis, and the MLE of infection was calculated as 1.9% (CI 1.25-2.77). No H. canis DNA was detected in any of the gDNA pools consisting of larva specimens. Partial sequences of the 18S rRNA gene shared 99-100% similarity with the corresponding H. canis isolates. Our results revealed the transstadial transmission of H. canis by R. sanguineus, both from larva to nymph and from nymph to adult, under field conditions. However, there was no evidence of transovarial transmission. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
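The pooled-sample MLE quoted above solves a one-parameter likelihood. A Python sketch with hypothetical pool sizes and PCR outcomes (not the study's pools) illustrates the calculation:

```python
# Sketch of the pooled-testing MLE: maximize the likelihood of pool-level PCR
# results over the per-tick infection probability. Pool sizes and outcomes are
# hypothetical, not the study's pools.
import numpy as np
from scipy import optimize

pool_sizes = np.array([50, 50, 50, 25, 25, 10, 10, 10])
positive = np.array([1, 1, 0, 1, 0, 1, 0, 0])

def negloglik(p):
    q = (1.0 - p) ** pool_sizes             # P(pool tests negative)
    return -np.sum(positive * np.log(1.0 - q) + (1 - positive) * np.log(q))

res = optimize.minimize_scalar(negloglik, bounds=(1e-6, 0.5), method="bounded")
print(res.x)                                 # MLE of per-tick infection rate
```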
Lim, Hyun Hwa; Lee, Sung Ok; Kim, Sun Yeou; Yang, Soo Jin; Lim, Yunsook
2013-10-01
The purpose of this study was to investigate the anti-inflammatory and antiobesity effects of combinational mulberry leaf extract (MLE) and mulberry fruit extract (MFE) in high-fat (HF) diet-induced obese mice. Mice were fed a control diet or a HF diet for nine weeks. After obesity was induced, the mice were administered single MLE at a low dose (133 mg/kg/day, LMLE) or high dose (333 mg/kg/day, HMLE), or combinational MLE and MFE (MLFE) at a low dose (133 mg MLE and 67 mg MFE/kg/day, LMLFE) or high dose (333 mg MLE and 167 mg MFE/kg/day, HMLFE), by stomach gavage for 12 weeks. The mulberry leaf and fruit extract treatment for 12 weeks did not show liver toxicity. The single MLE and combinational MLFE treatments significantly decreased plasma triglyceride, liver lipid peroxidation levels and adipocyte size and improved hepatic steatosis as compared with the HF group. The combinational MLFE treatment significantly decreased body weight gain, fasting plasma glucose and insulin, and homeostasis model assessment of insulin resistance. HMLFE treatment significantly improved glucose control during the intraperitoneal glucose tolerance test compared with the HF group. Moreover, HMLFE treatment reduced protein levels of oxidative stress markers (manganese superoxide dismutase) and inflammatory markers (monocyte chemoattractant protein-1, inducible nitric oxide synthase, C-reactive protein, tumour necrosis factor-α and interleukin-1) in liver and adipose tissue. Taken together, combinational MLFE treatment has potential antiobesity and antidiabetic effects through modulation of obesity-induced inflammation and oxidative stress in HF diet-induced obesity.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
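The accuracy claim about the posterior mean can be checked with a short Monte Carlo. A Python sketch for a binomial detection probability (the true value, trial count and uniform prior are illustrative choices):

```python
# Monte Carlo sketch: posterior mean (uniform Beta(1,1) prior) vs. MLE for a
# binomial detection probability. True value, n and prior are illustrative.
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 0.3, 10, 20000
hits = rng.binomial(n, theta, size=reps)

mle = hits / n
post_mean = (hits + 1.0) / (n + 2.0)

print(np.mean((mle - theta) ** 2))        # MSE of the MLE
print(np.mean((post_mean - theta) ** 2))  # typically smaller for small n
```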
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
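For contrast with the sparse ℓ1 approach mentioned above, here is a minimal Python sketch of the standard graphical-lasso baseline using scikit-learn (the COP method itself is not available there; the tridiagonal precision matrix and regularization strength are illustrative assumptions):

```python
# Sketch of the sparse (graphical lasso) baseline the paper contrasts with;
# COP itself is not in scikit-learn. The tridiagonal precision matrix and
# regularization strength are illustrative assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(6)
p = 20
precision = np.eye(p)
for i in range(p - 1):                     # sparse, diagonally dominant, SPD
    precision[i, i + 1] = precision[i + 1, i] = 0.3
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(precision), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.count_nonzero(np.abs(model.precision_) > 1e-6), "of", p * p)
```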
Wang, Tao; Li, Hua; Wang, Hua; Su, Jing
2015-04-16
The present study established a typing method with NotI-based pulsed-field gel electrophoresis (PFGE) and a multilocus sequence typing (MLST) scheme based on stress response genes for 55 Oenococcus oeni strains isolated from six individual regions in China and two model strains, PSU-1 (CP000411) and ATCC BAA-1163 (AAUV00000000). Seven stress response genes, cfa, clpL, clpP, ctsR, mleA, mleP and omrA, were selected for MLST testing, and positive selective pressure was detected for these genes. Furthermore, both methods separated the strains into two clusters. The PFGE clusters are correlated with region, whereas the sequence types (STs) formed by the MLST confirm the two clusters identified by PFGE. In addition, the population structure was a mixture of evolutionary pathways, and the strains exhibited both clonal and panmictic characteristics. Copyright © 2015 Elsevier B.V. All rights reserved.
Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia
NASA Astrophysics Data System (ADS)
Hamdan, Nurhidayah; Daud, Noorizam
2014-07-01
Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accident tragedies killed 6,917 people and injured or disabled 17,522 people in 2012, and the government spent about RM9.3 billion in 2009, a cost to the nation of approximately 1 to 2 percent of gross domestic product (GDP) annually. The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on an arbitrary value of 6:4 (equivalently 1.5:1), reflecting the fact that six factors are involved in the accident cost calculation for fatal accidents while four factors are involved for non-fatal accidents. This simple rule used by the authority to calculate the cost ratio is doubtful, since there is a lack of mathematical and conceptual evidence to explain how the ratio is determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Owing to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents have been used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood estimation (MLE) and robust estimation. The findings of this study reveal that the accident cost ratio for fatal to non-fatal claims is 1.33 when using MLE, while for robust estimates the ratio is slightly higher at 1.51. This study will help the authority determine a more accurate cost ratio between fatal and non-fatal accidents than the official ratio set by the government, since the cost ratio is an important element used as a weighting in modeling road accident data. Therefore, this study provides some guidance for revising the insurance claim basis set by the Malaysian road authority, so that an appropriate method for implementation in Malaysia can be identified.
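A minimal Python sketch of the MLE step, fitting Weibull distributions to synthetic claim amounts for fatal and non-fatal accidents and taking the ratio of fitted means (all parameter values are illustrative, not the Malaysian claims data):

```python
# Sketch: Weibull MLE fits to synthetic fatal and non-fatal claim amounts and
# the implied mean-cost ratio. All parameter values are illustrative, not the
# Malaysian insurance data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
fatal = stats.weibull_min.rvs(1.3, scale=60000, size=300, random_state=rng)
nonfatal = stats.weibull_min.rvs(1.5, scale=42000, size=900, random_state=rng)

c_f, _, s_f = stats.weibull_min.fit(fatal, floc=0.0)      # MLE, loc fixed at 0
c_n, _, s_n = stats.weibull_min.fit(nonfatal, floc=0.0)

ratio = (stats.weibull_min.mean(c_f, scale=s_f)
         / stats.weibull_min.mean(c_n, scale=s_n))
print(ratio)                               # estimated fatal:non-fatal cost ratio
```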
NASA Astrophysics Data System (ADS)
Kulisek, J. A.; Schweppe, J. E.; Stave, S. C.; Bernacki, B. E.; Jordan, D. V.; Stewart, T. N.; Seifert, C. E.; Kernan, W. J.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this challenge, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements without the need for human analyst intervention. The method can be calibrated using radiation transport simulations along with data from previous flights over areas for which the isotopic composition need not be known. Over the examined measured and simulated data sets, the method generated accurate background estimates even in the presence of a strong 60Co source. The potential to track large and abrupt changes in background spectral shape and magnitude was demonstrated. The method can be implemented fairly easily in most modern computing languages and environments.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of these monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend; thus, we ought to model a non-stationary model too. Model 2, where the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that yearly maxima are better for convergence to the GEV distribution, especially if longer records are available. Return level estimates, that is, the level (in this study, the return amount) that is expected to be exceeded on average once every T time periods, start to appear in the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
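A Python sketch of the stationary GEV fit and the return-level calculation (synthetic yearly maxima; note that scipy's shape parameter c equals minus the usual GEV shape ξ):

```python
# Sketch: stationary GEV fit by MLE and a T-period return level on synthetic
# yearly maxima. Note scipy's shape c equals minus the usual GEV shape xi.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
maxima = stats.genextreme.rvs(-0.1, loc=0.05, scale=0.02, size=40,
                              random_state=rng)

c_hat, loc_hat, scale_hat = stats.genextreme.fit(maxima)

T = 50                                     # return period
level = stats.genextreme.ppf(1.0 - 1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(c_hat, level)                        # exceeded on average once per T periods
```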
Study on constant-step stress accelerated life tests in white organic light-emitting diodes.
Zhang, J P; Liu, C; Chen, X; Cheng, G L; Zhou, A X
2014-11-01
In order to obtain reliability information for a white organic light-emitting diode (OLED), two constant-stress tests and one step-stress test were conducted with increased working current. The Weibull function was applied to describe the OLED life distribution, and maximum likelihood estimation (MLE) and its iterative flow chart were used to calculate the shape and scale parameters. Furthermore, the accelerated life equation was determined using the least squares method, a Kolmogorov-Smirnov test was performed to assess whether the white OLED life follows a Weibull distribution, and self-developed software was used to predict the average and median lifetimes of the OLED. The numerical results indicate that white OLED life conforms to a Weibull distribution and that the accelerated life equation completely satisfies the inverse power law. The estimated life of a white OLED may provide significant guidance for its manufacturers and customers. Copyright © 2014 John Wiley & Sons, Ltd.
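The least-squares step for the inverse power law is a straight line in log-log coordinates. A Python sketch with made-up characteristic lives at two stress currents (not the paper's measurements) extrapolates to a use-level current:

```python
# Sketch: least-squares fit of the inverse power law eta = A * I^(-n) in
# log-log coordinates and extrapolation to a use-level current. The fitted
# Weibull scales and currents are made-up numbers, not the paper's data.
import numpy as np

current = np.array([40.0, 60.0])          # stress currents, mA
eta = np.array([5200.0, 1900.0])          # Weibull scale (characteristic life), h

slope, logA = np.polyfit(np.log(current), np.log(eta), 1)
n_hat = -slope
eta_use = np.exp(logA) * 20.0 ** (-n_hat)  # extrapolate to 20 mA use current
print(n_hat, eta_use)
```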
NASA Astrophysics Data System (ADS)
Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim
2011-06-01
Most empirical research on the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
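As context for the estimation problem above, the following Python sketch recovers unobserved asset values and volatility in the plain Merton model using the common iterative (KMV-style) scheme; this is a simpler baseline than Duan's transformed-data MLE, and the equity path, debt face value and rates are simulated assumptions:

```python
# Sketch: iterative (KMV-style) recovery of asset value and volatility in the
# plain Merton model; a simpler baseline than Duan's transformed-data MLE.
# The equity path, debt face value and risk-free rate are simulated assumptions.
import numpy as np
from scipy import optimize, stats

def bs_call(V, K, r, sigma, T):
    d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)

rng = np.random.default_rng(9)
E = 100.0 * np.exp(np.cumsum(rng.normal(5e-4, 0.02, size=250)))  # daily equity
K, r, T, dt = 80.0, 0.02, 1.0, 1.0 / 250.0

sigma_V = np.std(np.diff(np.log(E))) / np.sqrt(dt)   # start from equity vol
for _ in range(50):
    # invert equity = call(assets) day by day
    V = np.array([optimize.brentq(lambda v, e=e: bs_call(v, K, r, sigma_V, T) - e,
                                  e, e + 10.0 * K) for e in E])
    new_sigma = np.std(np.diff(np.log(V))) / np.sqrt(dt)
    if abs(new_sigma - sigma_V) < 1e-6:
        break
    sigma_V = new_sigma
print(sigma_V, V[-1])
```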
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of the tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
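The two-component mixture estimation described above maps directly onto an EM-fitted Gaussian mixture. A Python sketch on synthetic voxel values (the component means, widths and the 20% hypoxic weight are illustrative assumptions):

```python
# Sketch: EM-fitted two-component Gaussian mixture on synthetic voxel uptake;
# the hypoxic fraction is the weight of the higher-mean component. Component
# means, widths and the 20% weight are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
voxels = np.concatenate([rng.normal(1.0, 0.25, 800),     # normoxic/necrotic
                         rng.normal(2.0, 0.35, 200)])    # hypoxic
gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels.reshape(-1, 1))

hypoxic = int(np.argmax(gmm.means_.ravel()))
print(gmm.weights_[hypoxic])               # estimated hypoxic fraction
```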
Holiga, Štefan; Mueller, Karsten; Möller, Harald E.; Urgošík, Dušan; Růžička, Evžen; Schroeter, Matthias L.; Jech, Robert
2015-01-01
During implantation of deep-brain stimulation (DBS) electrodes in the target structure, neurosurgeons and neurologists commonly observe a “microlesion effect” (MLE), which occurs well before initiating subthalamic DBS. This phenomenon typically leads to a transitory improvement of motor symptoms of patients suffering from Parkinson's disease (PD). Mechanisms behind MLE remain poorly understood. In this work, we exploited the notion of ranking to assess spontaneous brain activity in PD patients examined by resting-state functional magnetic resonance imaging in response to penetration of DBS electrodes in the subthalamic nucleus. In particular, we employed a hypothesis-free method, eigenvector centrality (EC), to reveal motor-communication-hubs of the highest rank and their reorganization following the surgery; providing a unique opportunity to evaluate the direct impact of disrupting the PD motor circuitry in vivo without prior assumptions. Penetration of electrodes was associated with increased EC of functional connectivity in the brainstem. Changes in connectivity were quantitatively related to motor improvement, which further emphasizes the clinical importance of the functional integrity of the brainstem. Surprisingly, MLE and DBS were associated with anatomically different EC maps despite their similar clinical benefit on motor functions. The DBS solely caused an increase in connectivity of the left premotor region suggesting separate pathophysiological mechanisms of both interventions. While the DBS acts at the cortical level suggesting compensatory activation of less affected motor regions, the MLE affects more fundamental circuitry as the dysfunctional brainstem predominates in the beginning of PD. These findings invigorate the overlooked brainstem perspective in the understanding of PD and support the current trend towards its early diagnosis. PMID:26509113
Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI.
Bhalerao, Abhir; Westin, Carl-Fredrik
2007-01-01
A mapping of unit vectors onto a 5D hypersphere is used to model and partition ODFs from HARDI data. This mapping has a number of useful and interesting properties and we make a link to interpretation of the second order spherical harmonic decompositions of HARDI data. The paper presents the working theory and experiments of using a von Mises-Fisher mixture model for directional samples. The MLE of the second moment of the HvMF pdf can also be related to fractional anisotropy. We perform error analysis of the estimation scheme in single and multi-fibre regions and then show how a penalised-likelihood model selection method can be employed to differentiate single and multiple fibre regions.
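A minimal sketch of von Mises-Fisher MLE fitting for directional samples, assuming the widely used Banerjee-style approximation for the concentration parameter rather than the paper's 5D HvMF pipeline:

```python
# Sketch: MLE of the vMF mean direction and an approximate MLE of kappa.
import numpy as np

def vmf_mle(x):
    """x: (n, d) array of unit vectors -> (mean direction, concentration)."""
    d = x.shape[1]
    resultant = x.mean(axis=0)
    r_bar = np.linalg.norm(resultant)                 # mean resultant length
    mu = resultant / r_bar                            # MLE of the mean direction
    kappa = r_bar * (d - r_bar**2) / (1 - r_bar**2)   # approximate MLE of kappa
    return mu, kappa

rng = np.random.default_rng(1)
samples = np.array([0.0, 0.0, 1.0]) + 0.2 * rng.standard_normal((500, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)  # project to the sphere
print(vmf_mle(samples))
```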
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries, products come from more than one production line, and comparative life tests are required. Sampling from the different production lines then leads to a joint censoring scheme. In this article, we consider Pareto lifetime distributions under a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
Distributed Practicum Supervision in a Managed Learning Environment (MLE)
ERIC Educational Resources Information Center
Carter, David
2005-01-01
This evaluation-research feasibility study piloted the creation of a technology-mediated managed learning environment (MLE) involving the implementation of one of a new generation of instructionally driven management information systems (IMISs). The system and the supporting information and communications technology (ICT) were employed to support…
Ashley, Madeleine; Dixon, Mike; Prasad, Krishna
2014-10-01
Differences in length and circumference of cigarettes may influence smoker behaviour and exposure to smoke constituents. Superslim king-size (KSSS) cigarettes (17 mm circumference versus the 25 mm circumference of conventional king-size [KS] cigarettes) have gained popularity in several countries, including Russia. Some smoke constituents are lower in machine-smoked KSSS versus KS cigarettes, but few data exist on actual exposure in smokers. We investigated mouth-level exposure (MLE) to tar and nicotine in Russian smokers of KSSS versus KS cigarettes and measured smoke constituents under machine-smoking conditions. MLE to tar was similar for smokers of 1 mg ISO tar yield products, but lower for smokers of 4 mg and 7 mg KSSS versus KS cigarettes. MLE to nicotine was lower in smokers of 4 mg KSSS versus KS cigarettes, but not for other tar bands. No gender differences were observed for nicotine or tar MLE. Under International Organization for Standardization, Health Canada Intense and Massachusetts regimes, KSSS cigarettes tended to yield less carbon monoxide, acetaldehyde, nitric oxide, acrylonitrile, benzene, 1,3-butadiene and tobacco-specific nitrosamines, but more formaldehyde, than KS cigarettes. In summary, differences in MLE were observed between cigarette formats, but not systematically across pack tar bands. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Using Mediated Learning Experiences To Enhance Children's Thinking.
ERIC Educational Resources Information Center
Seng, SeokHoon
This paper focuses on the relationship between adult-child interactions and the developing cognitive competence of young children as rated by the Mediated Learning Experience (MLE) Scale. The scale was devised to reflect 10 criteria of adult-child interaction hypothesized to comprise an MLE and therefore to enhance children's cognitive…
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
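The MLE fusion rule tested in this study has a simple closed form: each cue is weighted by its relative reliability (inverse variance). A minimal sketch with illustrative numbers, not the study's data:

```python
# Sketch: reliability-weighted (MLE) fusion of auditory and visual location cues.
def mle_fuse(x_a, var_a, x_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight grows with reliability
    fused = w_a * x_a + (1 - w_a) * x_v
    fused_var = 1 / (1 / var_a + 1 / var_v)      # fused variance is reduced
    return fused, fused_var

# A reliable visual cue dominates an unreliable auditory cue:
print(mle_fuse(x_a=10.0, var_a=4.0, x_v=2.0, var_v=1.0))  # pulled toward 2.0
```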
Misencik, Michael J.; Grubaugh, Nathan D.; Andreadis, Theodore G.; Ebel, Gregory D.
2016-01-01
The genus Flavivirus includes a number of newly recognized viruses that infect and replicate only within mosquitoes. To determine whether insect-specific flaviviruses (ISFs) may infect Culiseta (Cs.) melanura mosquitoes, we screened pools of field-collected mosquitoes for virus infection by RT-PCR targeting conserved regions of the NS5 gene. NS5 nucleotide sequences amplified from Cs. melanura pools were genetically similar to other ISFs and most closely matched Calbertado virus from Culex tarsalis, sharing 68.7% nucleotide and 76.1% amino acid sequence identity. The complete genome of one virus isolate was sequenced to reveal a primary open reading frame (ORF) encoding a viral polyprotein characteristic of the genus Flavivirus. Phylogenetic analysis showed that this virus represents a distinct evolutionary lineage that belongs to the classical ISF group. The virus was detected solely in Cs. melanura pools, occurred in sampled populations from Connecticut, New York, New Hampshire, and Maine, and infected both adult and larval stages of the mosquito. Maximum likelihood estimate infection rates (MLE-IR) were relatively stable in overwintering Cs. melanura larvae collected monthly from November of 2012 through May of 2013 (MLE-IR = 0.7–2.1/100 mosquitoes) and in host-seeking females collected weekly from June through October of 2013 (MLE-IR = 3.8–11.5/100 mosquitoes). Phylogenetic analysis of viral sequences revealed limited genetic variation that lacked obvious geographic structure among strains in the northeastern United States. This new virus is provisionally named Culiseta flavivirus on the basis of its host association with Cs. melanura. PMID:26807512
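The MLE infection rates (MLE-IR) quoted here, and in similar pooled-testing studies below, come from maximizing the likelihood that each pool tests positive or negative. A minimal sketch with invented numbers, assuming equal-size pools for simplicity (published estimators also handle variable pool sizes and bias corrections):

```python
# Sketch: MLE of a per-mosquito infection rate from pooled test results.
import numpy as np
from scipy.optimize import minimize_scalar

def mle_infection_rate(pool_sizes, positives):
    pool_sizes = np.asarray(pool_sizes, float)
    positives = np.asarray(positives, bool)

    def negloglik(p):
        q = (1 - p) ** pool_sizes  # P(a pool of size n is negative)
        return -np.sum(np.where(positives, np.log1p(-q), np.log(q)))

    return minimize_scalar(negloglik, bounds=(1e-8, 0.5), method="bounded").x

# Example: 30 pools of 25 mosquitoes each, 4 pools positive
p_hat = mle_infection_rate(np.full(30, 25), [True] * 4 + [False] * 26)
print(f"MLE-IR: {100 * p_hat:.2f} per 100 mosquitoes")
```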
Aydin, Mehmet Fatih; Aktas, Munir; Dumanli, Nazir
2015-01-01
A molecular survey was undertaken in the Black Sea region of Turkey to determine the presence of Theileria and Babesia species of medical and veterinary importance. The ticks were removed from sheep and goats, pooled according to species and locations, and analyzed by PCR-based reverse line blot (RLB) and sequencing. A total of 2241 ixodid ticks belonging to 5 genera and 12 species were collected and divided into 310 pools. Infection rates were calculated as maximum likelihood estimates (MLE) with 95% confidence intervals (CI). Of the 310 pools tested, 46 (14.83%) were found to be infected with Theileria or Babesia species, and the overall MLE of the infection rate was calculated as 2.27% (CI 1.67-2.99). The MLEs of the infection rates were calculated as 0.691% (CI 0.171-1.78) in Haemaphysalis parva, 1.47% (CI 0.081-6.37) in Rhipicephalus sanguineus, 1.84% (CI 0.101-7.87) in Ixodes ricinus, 2.86% (CI 1.68-4.48) in Rhipicephalus turanicus, 5.57% (CI 0.941-16.3) in Hyalomma marginatum, and 6.2% (CI 4.02-9.02) in Rhipicephalus bursa. Pathogens identified in ticks included Theileria ovis, Babesia ovis, Babesia bigemina, and Babesia microti. Most tick pools were infected with a single pathogen. However, five pools displayed mixed infections with T. ovis and B. ovis. This study provides the first molecular evidence for the presence of B. microti in ticks in Turkey.
Subpixel based defocused points removal in photon-limited volumetric dataset
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.
2017-03-01
The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. At first, multiple 2D intensity images, known as Elemental images (EI), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis e.g., 3D object tracking, recognition, classification and navigation. In this paper, we present a subpixel level three-step based technique (i.e. involving adaptive thresholding, boundary detection and entropy based segmentation) to discard the defocused sparse-samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
The Coming of Age of Media Literacy
ERIC Educational Resources Information Center
Domine, Vanessa
2011-01-01
A decade into a new millennium marks a coming of age for media literacy education (MLE). Born from teaching the critical analysis of media texts, MLE has evolved into helping individuals of all ages "develop the habits of inquiry and skills of expression that they need to be critical thinkers, effective communicators and active citizens in…
NASA Astrophysics Data System (ADS)
Degaudenzi, Riccardo; Vanghi, Vieri
1994-02-01
An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived, and the numerical results are checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramer-Rao bound is also discussed. The mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
Undesirable Features of the Medical Learning Environment: A Narrative Review of the Literature
ERIC Educational Resources Information Center
Benbassat, Jochanan
2013-01-01
The objective of this narrative review of the literature is to draw attention to four undesirable features of the medical learning environment (MLE). First, students' fears of personal inadequacy and making errors are enhanced rather than alleviated by the hidden curriculum of the clinical teaching setting; second, the MLE projects a denial…
Games and Machine Learning: A Powerful Combination in an Artificial Intelligence Course
ERIC Educational Resources Information Center
Wallace, Scott A.; McCartney, Robert; Russell, Ingrid
2010-01-01
Project MLeXAI [Machine Learning eXperiences in Artificial Intelligence (AI)] seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense--a simple real-time strategy game…
Games and machine learning: a powerful combination in an artificial intelligence course
NASA Astrophysics Data System (ADS)
Wallace, Scott A.; McCartney, Robert; Russell, Ingrid
2010-03-01
Project MLeXAI (Machine Learning eXperiences in Artificial Intelligence (AI)) seeks to build a set of reusable course curricula and hands-on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense - a simple real-time strategy game and Checkers - a classic turn-based board game. From the instructors' perspective, we examine aspects of design and implementation as well as the challenges and rewards of using the curricula. We explore students' responses to the projects via the results of a common survey. Finally, we compare the student perceptions from the game-based projects to non-game-based projects from the first phase of Project MLeXAI.
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are more robust to data contamination than the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
Yuki, Dai; Sakaguchi, Chikako; Kikuchi, Akira; Futamura, Yasuyuki
2017-07-01
The objective of this clinical study was to investigate the pharmacokinetics of nicotine following the use of a prototype novel tobacco vapor (PNTV) product in comparison to a conventional cigarette (CC1). The study was conducted in Japanese healthy adult male smokers, using an open-label, randomized, two-period crossover design, to assess the pharmacokinetics of nicotine after controlled use of a PNTV product or CC1. During the study period, blood samples were drawn from subjects for the measurement of plasma nicotine concentrations, and nicotine intake was estimated from the mouth-level exposure (MLE). The Cmax and AUClast following the use of the PNTV product were 45.7% and 68.3%, respectively, of those obtained with CC1, and there were no significant differences in tmax and t1/2 between the PNTV product and CC1. The estimated MLE following the use of the PNTV product was approximately two-thirds of that obtained following the smoking of CC1, but the relative bioavailability of the PNTV product to CC1 was approximately 104%. The differences in Cmax and AUClast between the PNTV product and CC1 are therefore explained by differences in nicotine intake. These results suggest that the PNTV product shows a similar pharmacokinetic profile to CC1, while delivering less nicotine following controlled use. Copyright © 2017 Japan Tobacco Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.
2017-04-01
A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving time window, shifted successively by 1 year, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonal windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using maximum likelihood estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. This increase did not occur simultaneously across the region. Some locations exhibited continuous increases only at minimum QT.
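A minimal sketch of the fitting step described above (synthetic data, not the study's discharge records): a Gamma distribution fitted by MLE, with parameter uncertainty from 1000 bootstrap resamples:

```python
# Sketch: Gamma MLE fit plus bootstrap confidence intervals for shape and scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flows = stats.gamma.rvs(a=2.5, scale=40.0, size=240, random_state=rng)  # ~20 y monthly

shape_hat, _, scale_hat = stats.gamma.fit(flows, floc=0)  # location fixed at zero

boot = np.array([stats.gamma.fit(rng.choice(flows, size=flows.size, replace=True), floc=0)
                 for _ in range(1000)])
print("shape:", shape_hat, np.percentile(boot[:, 0], [2.5, 97.5]))
print("scale:", scale_hat, np.percentile(boot[:, 2], [2.5, 97.5]))
```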
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Shanshan; Pan, Xiujie; Xu, Long
Purpose: Radiation-induced pulmonary fibrosis results from thoracic radiation therapy and severely limits radiation therapy approaches. CD4+CD25+FoxP3+ regulatory T cells (Tregs) as well as epithelium-to-mesenchyme transition (EMT) cells are involved in pulmonary fibrosis induced by multiple factors. However, the mechanisms of Tregs and EMT cells in irradiation-induced pulmonary fibrosis remain unclear. In the present study, we investigated the influence of Tregs on EMT in radiation-induced pulmonary fibrosis. Methods and Materials: Mice thoraxes were irradiated (20 Gy), and Tregs were depleted by intraperitoneal injection of a monoclonal anti-CD25 antibody 2 hours after irradiation and every 7 days thereafter. Mice were treated on days 3, 7, and 14 and 1, 3, and 6 months post irradiation. The effectiveness of Treg depletion was assayed via flow cytometry. EMT and β-catenin in lung tissues were detected by immunohistochemistry. Tregs isolated from murine spleens were cultured with mouse lung epithelial (MLE) 12 cells, and short interfering RNA (siRNA) knockdown of β-catenin in MLE 12 cells was used to explore the effects of Tregs on EMT and β-catenin via flow cytometry and Western blotting. Results: Anti-CD25 antibody treatment depleted Tregs efficiently, attenuated the process of radiation-induced pulmonary fibrosis, hindered EMT, and reduced β-catenin accumulation in lung epithelial cells in vivo. The coculture of Tregs with irradiated MLE 12 cells showed that Tregs could promote EMT in MLE 12 cells and that the effect of Tregs on EMT was partially abrogated by β-catenin knockdown in vitro. Conclusions: Tregs can promote EMT in accelerating radiation-induced pulmonary fibrosis. This process is partially mediated through β-catenin. Our study suggests a new mechanism for EMT, promoted by Tregs, that accelerates radiation-induced pulmonary fibrosis.
Sinzelle, Ludivine; Chesneau, Albert; Bigot, Yves; Mazabraud, André; Pollet, Nicolas
2006-01-01
Mariner-like elements (MLEs) belong to the Tc1-mariner superfamily of DNA transposons, which is very widespread in animal genomes. We report here the first complete description of a MLE, Xtmar1, within the genome of a poikilotherm vertebrate, the amphibian Xenopus tropicalis. A close relative, XlMLE, is also characterized within the genome of a sibling species, Xenopus laevis. The phylogenetic analysis of the relationships between MLE transposases reveals that Xtmar1 is closely related to Hsmar2 and Bytmar1 and that together they form a second distinct lineage of the irritans subfamily. All members of this lineage are also characterized by the 36- to 43-bp size of their imperfectly conserved inverted terminal repeats and by the -8-bp motif located at their outer extremity. Since XlMLE, Xlmar1, and Hsmar2 are present in species located at both extremities of the vertebrate evolutionary tree, we looked for MLE relatives belonging to the same subfamily in the available sequencing projects using the amino acid consensus sequence of the Hsmar2 transposase as an in silico probe. We found that irritans MLEs are present in chordate genomes including most craniates. This therefore suggests that these elements have been present within chordate genomes for 750 Myr and that the main way they have been maintained in these species has been via vertical transmission. The very small number of stochastic losses observed in the data available suggests that their inactivation during evolution has been very slow.
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
ERIC Educational Resources Information Center
Tzuriel, David; Shomron, Vered
2018-01-01
Background: The theoretical framework of the current study is based on mediated learning experience (MLE) theory, which is similar to the scaffolding concept. The main question of the current study was to what extent mother-child MLE strategies affect psychological resilience and cognitive modifiability of boys with learning disability (LD).…
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
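A minimal Metropolis-Hastings sketch of the MCMC calibration idea described above, applied to a toy one-parameter linear model rather than an actual PBTK/TD model; all data and priors are invented:

```python
# Sketch: Bayesian calibration by random-walk Metropolis-Hastings.
import numpy as np

rng = np.random.default_rng(0)
doses = np.array([1.0, 2.0, 4.0, 8.0])
measured = 0.7 * doses + rng.normal(0, 0.2, size=doses.size)  # toy "internal doses"

def log_posterior(k, sigma=0.2):
    if k <= 0:
        return -np.inf
    log_prior = -0.5 * (np.log(k) / 0.5) ** 2        # lognormal prior centered at k = 1
    log_lik = -0.5 * np.sum(((measured - k * doses) / sigma) ** 2)
    return log_prior + log_lik

samples, k = [], 1.0
for _ in range(20000):
    proposal = k + rng.normal(0, 0.05)               # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(k):
        k = proposal                                 # accept; otherwise keep k
    samples.append(k)

posterior = np.array(samples[5000:])                 # discard burn-in
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))
```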
NASA Astrophysics Data System (ADS)
Bhushan, Awani; Panda, S. K.
2018-05-01
The influence of bimodularity (different stress-strain behaviour in tension and compression) on the fracture behaviour of graphite specimens has been studied with fracture toughness (KIc), critical J-integral (JIc) and critical strain energy release rate (GIc) as the characterizing parameters. The bimodularity index (ratio of tensile Young's modulus to compressive Young's modulus) of graphite specimens has been obtained from the normalized test data of tensile and compression experimentation. Single edge notch bend (SENB) testing of pre-cracked specimens from the same lot has been carried out as per ASTM standard D7779-11 to determine the peak load and the critical fracture parameters KIc, GIc and JIc using digital image correlation of crack opening displacements. Weibull weakest-link theory has been used to evaluate the mean peak load, Weibull modulus and goodness of fit, employing the two-parameter least-squares method (LIN2) and the biased (MLE2-B) and unbiased (MLE2-U) maximum likelihood estimators. The stress-dependent elasticity problem of three-dimensional crack progression behaviour for the bimodular graphite components has been solved as an iterative finite element procedure. The crack-characterizing parameters, critical stress intensity factor and critical strain energy release rate, have been estimated with the help of a Weibull distribution plot of peak load versus cumulative probability of failure. Experimental and computational fracture parameters have been compared qualitatively to describe the significance of bimodularity. The bimodular influence on the fracture behaviour of SENB graphite is reflected in the experimental evaluation of the GIc values only, which have been found to differ from the calculated JIc values. The numerical evaluation of the bimodular 3D J-integral value is found to be close to the GIc value, whereas the unimodular 3D J-value is nearer to the JIc value. The significant difference between the unimodular JIc and bimodular GIc indicates that GIc should be considered the standard fracture parameter for bimodular brittle specimens.
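A minimal sketch contrasting the two fitting routes named above for a two-parameter Weibull, with synthetic peak loads standing in for the SENB data; the median-rank (Benard) plotting positions are an assumption of the example:

```python
# Sketch: Weibull modulus and scale by linearized least squares (LIN2) vs. MLE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
loads = stats.weibull_min.rvs(c=8.0, scale=1200.0, size=30, random_state=rng)

# LIN2: since F(x) = 1 - exp(-(x/eta)^m), ln(-ln(1 - F)) is linear in ln(x)
x = np.sort(loads)
F = (np.arange(1, x.size + 1) - 0.3) / (x.size + 0.4)   # Benard's approximation
slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1 - F)), 1)
m_lin, eta_lin = slope, np.exp(-intercept / slope)

# MLE: scipy maximizes the likelihood numerically (location fixed at zero)
m_mle, _, eta_mle = stats.weibull_min.fit(loads, floc=0)
print(f"LIN2: m={m_lin:.2f}, eta={eta_lin:.0f}; MLE: m={m_mle:.2f}, eta={eta_mle:.0f}")
```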
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient which is an equivalence to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm based on two sets of physically realisable system input-output data details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when local minimum is present as the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and a gas turbine engine real system for model identification, respectively. Finally, the graphical validation of threshold on a two-dimensional plot is discussed.
Image Processing and Computer Aided Diagnosis in Computed Tomography of the Breast
2007-10-01
Brian Harrawood, Ronald Pedroni, Alexander Crowell, Robert Macri, Mathew Kiser, Richard Walter, Werner Tornow, Neutron Stimulated Emission... [Eq. (2), an iterative MLE update rule, is garbled beyond recovery in extraction.] The MLE estimate is known to increase high-frequency image noise. To overcome this, some... Contrast-to-noise ratio results for the three images shown in Figure 5:
            With grid   w/o grid   w/o grid, scatter reduction
RSF         11%         45%        10%
CNR         7.04        6.99       ...
Limited Translation Shrinkage Estimation of Loss Rates in Marine Corps Manpower Models.
1986-03-01
scale. C. AGGREGATION: The current Marine Corps attrition rate... TABLE 5 - AVIATION FIGURES OF MERIT (transformed FOM, 1st LT):
            1981      1982      1983
AGG ORIG    6.405     9.645     11.393
AGG TRANS   197.621   202.022   208.808
MLE         3.914     ...
V. CONCLUSIONS AND RECOMMENDATIONS. A. CONCLUSIONS: This study investigated the performance of various attrition
Acute Dermal Toxicity of Diethyleneglycol Dinitrate in Rabbits
1988-09-01
ACC#   ANIMAL ID   SEX      DIAGNOSIS
38260  85F157      Male     Not remarkable (NR)
38261  85F158      Male     Purulent otitis media, bilateral
38262  85F159      Male     NR
38263  85F160      Male     NR
38264  85F161      Male     Purulent otitis media, left ear
38265  85F164      Female   NR
38266  85F166      Female   NR
38267  85F167      Female   NR
38268  85F168      Female   ...
Liljegren, Mats; Ekberg, Kerstin
2009-01-01
The aim of the present study was to examine the cross-sectional and 2-year longitudinal associations between perceived organizational justice, self-rated health and burnout. The study used questionnaire data from 428 Swedish employment officers, and the data were analyzed with Structural Equation Modeling (SEM). Two different models were tested: a global organizational justice model (with and without correlated measurement errors) and a differentiated (distributive, procedural and interactional organizational justice) justice model (with and without correlated measurement errors). The global justice model with autocorrelations had the most satisfactory goodness-of-fit indices. Global justice showed statistically significant (p < 0.01) positive cross-sectional (0.80-0.84) and longitudinal (0.76-0.82) associations between organizational justice and self-rated health, and significant (p < 0.01) negative associations between organizational justice and burnout (cross-sectional: -0.85; longitudinal: -0.84 to -0.83). The global justice construct showed better goodness-of-fit indices than the threefold justice construct, but a differentiated organizational justice concept could give valuable information about health-related risk factors: whether they are structural (distributive justice), procedural (procedural justice) or inter-personal (interactional justice). The two approaches to studying organizational justice should therefore be regarded as complementary rather than exclusive.
Xue, Jinkai; Zhang, Yanyan; Liu, Yang; Gamal El-Din, Mohamed
2016-01-01
The release of oil sands process-affected water (OSPW) into the environment is a concern because it contains persistent organic pollutants that are toxic to aquatic life. A modified Ludzack-Ettinger membrane bioreactor (MLE-MBR) with a submerged ceramic membrane was continuously operated for 425 days to evaluate its feasibility on OSPW treatment. A stabilized biomass concentration of 3730 mg mixed liquor volatile suspended solids per litre and a naphthenic acid (NA) removal of 24.7% were observed in the reactor after 361 days of operation. Ultra Performance Liquid Chromatography/High Resolution Mass Spectrometry analysis revealed that the removal of individual NA species declined with increased ring numbers. Pyrosequencing analysis revealed that Betaproteobacteria were dominant in sludge samples from the MLE-MBR, with microorganisms such as Rhodocyclales and Sphingobacteriales capable of degrading hydrocarbon and aromatic compounds. During 425 days of continuous operation, no severe membrane fouling was observed as the transmembrane pressure (TMP) of the MLE-MBR never exceeded -20 kPa given that the manufacturer's suggested critical TMP for chemical cleaning is -35 kPa. Our results indicated that the proposed MLE-MBR has a good potential for removing recalcitrant organics in OSPW. Copyright © 2015 Elsevier Ltd. All rights reserved.
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
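For the plain Wiener process X(t) = mu*t + sigma*B(t), without the paper's time transformation, unit-to-unit variation, or measurement-error terms, the MLEs of drift and diffusion have closed forms in the increments. A minimal sketch under that simplifying assumption:

```python
# Sketch: closed-form MLE of Wiener-process drift and diffusion from increments.
import numpy as np

rng = np.random.default_rng(3)
dt, mu_true, sigma_true = 1.0, 0.05, 0.1
increments = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(200)

# Increments over dt are i.i.d. N(mu*dt, sigma^2*dt), so the MLEs are moments:
mu_hat = increments.mean() / dt
sigma_hat = np.sqrt(((increments - mu_hat * dt) ** 2).mean() / dt)
print(mu_hat, sigma_hat)
```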
Zika and Chikungunya virus detection in naturally infected Aedes aegypti in Ecuador.
Cevallos, Varsovia; Ponce, Patricio; Waggoner, Jesse J; Pinsky, Benjamin A; Coloma, Josefina; Quiroga, Cristina; Morales, Diego; Cárdenas, Maria José
2018-01-01
The wide and rapid spread of Chikungunya (CHIKV) and Zika (ZIKV) viruses represent a global public health problem, especially for tropical and subtropical environments. The early detection of CHIKV and ZIKV in mosquitoes may help to understand the dynamics of the diseases in high-risk areas, and to design data based epidemiological surveillance to activate the preparedness and response of the public health system and vector control programs. This study was done to detect ZIKV and CHIKV viruses in naturally infected fed female Aedes aegypti (L.) mosquitoes from active epidemic urban areas in Ecuador. Pools (n=193; 22 pools) and individuals (n=22) of field collected Ae. aegypti mosquitoes from high-risk arboviruses infection sites in Ecuador were analyzed for the presence of CHIKV and ZIKV using RT-PCR. Phylogenetic analysis demonstrated that both ZIKV and CHIKV viruses circulating in Ecuador correspond to the Asian lineages. Minimum infection rate (MIR) of CHIKV for Esmeraldas city was 2.3% and the maximum likelihood estimation (MLE) was 3.3%. The minimum infection rate (MIR) of ZIKV for Portoviejo city was 5.3% and for Manta city was 2.1%. Maximum likelihood estimation (MLE) for Portoviejo city was 6.9% and 2.6% for Manta city. Detection of arboviruses and infection rates in the arthropod vectors may help to predict an outbreak and serve as a warning tool in surveillance programs. Copyright © 2017 Elsevier B.V. All rights reserved.
Calibration of a stochastic health evolution model using NHIS data
NASA Astrophysics Data System (ADS)
Gupta, Aparna; Li, Zhisheng
2011-10-01
This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.
Are the fluctuations in dynamic anterior surface aberrations of the human eye chaotic?
Jayakumar, Varadharajan; Thapa, Damber; Hutchings, Natalie; Lakshminarayanan, Vasudevan
2013-12-15
The purpose of the study is to measure chaos in dynamic anterior surface aberrations and examine how it varies between the eyes of an individual. Noninvasive tear breakup time and dynamic corneal surface aberrations were measured for two open-eye intervals of 15 s. The maximal Lyapunov exponent (MLE) was calculated to test the nature of the fluctuations of the dynamic anterior surface aberrations. The average MLE for total higher-order aberration (HOA) was found to be small (+0.0102±0.0072) μm/s. No significant difference in MLE was found between the eyes for HOA (t-test; p=0.131). Data analysis was carried out for individual Zernike coefficients, including vertical prism as it gives a direct measure of the thickness of the tear film over time. The results show that the amount of chaos was small for each Zernike coefficient and not significantly correlated between the eyes.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
NASA Astrophysics Data System (ADS)
Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.
2017-12-01
Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating noncolocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer in the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple point statistics (MPS) simulations. Subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference with observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly sensitive to the definition of the rock-physics transform; it is therefore important to model this transfer function accurately.
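A minimal sketch of the first approach described above, fitting a bimodal (two-component) lognormal model to resistivity values by direct maximum likelihood; the component parameters and lithology labels are invented for the example:

```python
# Sketch: MLE fit of a two-component lognormal mixture to resistivity values.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(11)
res = np.concatenate([rng.lognormal(3.0, 0.30, 600),    # e.g., finer lithology
                      rng.lognormal(4.2, 0.25, 400)])   # e.g., coarser lithology

def negloglik(theta):
    w, m1, s1, m2, s2 = theta
    if not (0 < w < 1 and s1 > 0 and s2 > 0):
        return np.inf
    pdf = (w * stats.lognorm.pdf(res, s1, scale=np.exp(m1))
           + (1 - w) * stats.lognorm.pdf(res, s2, scale=np.exp(m2)))
    return -np.sum(np.log(pdf + 1e-300))

fit = minimize(negloglik, x0=[0.5, 2.8, 0.4, 4.0, 0.4], method="Nelder-Mead")
print(fit.x)  # mixture weight and lognormal parameters of each component
```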
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES
Han, Qiyang; Wellner, Jon A.
2017-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410
Al Omairi, Naif E; Radwan, Omyma K; Alzahrani, Yahea A; Kassab, Rami B
2018-03-20
Due to the high ability of cadmium (Cd) to cross the blood-brain barrier, Cd causes severe neurological damage. Hence, the purpose of this study was to investigate the possible protective effect of Mangifera indica leaf extract (MLE) against Cd-induced neurotoxicity. Rats were divided into eight groups. Group 1 served as the vehicle control group; groups 2, 3 and 4 received MLE (100, 200 and 300 mg/kg b.wt, respectively). Group 5 was treated with CdCl2 (5 mg/kg b.wt). Groups 6, 7 and 8 were co-treated with MLE and CdCl2 using the same doses. All treatments were orally administered for 28 days. Cortical oxidative stress biomarkers [malondialdehyde (MDA), nitric oxide (NO), glutathione content (GSH), oxidized glutathione (GSSG), 8-hydroxy-2-deoxyguanosine (8-OHdG), superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx)], inflammatory cytokines [tumor necrosis factor (TNF-α) and interleukin-1β (IL-1β)], biogenic amines [norepinephrine (NE), dopamine (DA) and serotonin (5-HT)], some biogenic metabolites [3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA) and 5-hydroxyindoleacetic acid (5-HIAA)], acetylcholinesterase (AChE) activity and the purinergic compound adenosine triphosphate (ATP) were determined in the frontal cortex of rats. Results indicated that Cd increased levels of the oxidative biomarkers (MDA, NO, GSSG and 8-OHdG) and the inflammatory mediators (TNF-α and IL-1β), while lowering GSH, SOD, CAT, GPx and ATP levels. Cd also significantly decreased AChE activity and the tested biogenic amines while elevating the tested metabolites in the frontal cortex. All of these disrupted cortical parameters were alleviated by MLE co-administration. MLE exerted an apparent protective effect against Cd-induced neurotoxicity, particularly at the medium and higher doses, which may be due to its antioxidant and anti-inflammatory activities.
Literacy Instruction in the Mother Tongue: The Case of Pupils Using Mixed Vocabularies
ERIC Educational Resources Information Center
Sanchez, Alma Sonia Q.
2013-01-01
In the institutionalization of the mother tongue-based multilingual education (MTB-MLE) in the country, several trainings were conducted introducing its unique features such as the use of the two-track method in teaching reading based on the frequency of the sounds of the first language (L1). This study attempted to find out how the accuracy track…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins-Fekete, Charles-Antoine; Beaulieu, Luc; Se
2016-08-15
To present two related developments of proton radiography (pRad) to minimize range uncertainty in proton therapy. The first combines a pRad with an X-ray CT to produce a patient-specific relative stopping power (RSP) map. The second aims to improve the pRad spatial resolution for accurate registration prior to the first. The enhanced pRad can also be used in a novel proton-CT reconstruction algorithm. Monte Carlo pRads were computed from three phantoms: the Gammex, the Catphan and an anthropomorphic head. An optimized cubic-spline estimator derives the most likely path. The length crossed by the protons voxel-by-voxel was calculated by combining their estimated paths with the CT. The difference between the theoretical (length × RSP) and measured energy loss was minimized through a least squares optimization (LSO) algorithm yielding the RSP map. To increase pRad spatial resolution for registration with the CT, the phantom was discretized into voxel columns. The average column RSP was optimized to maximize the proton energy loss likelihood (MLE). Simulations showed precise RSP (<0.75%) for Gammex materials except low-density lung (<1.2%). For the head, accurate RSP values were obtained (µ = −0.10%, 1.5σ = 1.12%) and the range precision was improved (ΔR80 of −0.20 ± 0.35%). Spatial resolution was increased in pRad (2.75 to 6.71 lp/cm) and in pCT reconstructed from MLE-enhanced pRad (2.83 to 5.86 lp/cm). The LSO decreases the range uncertainty (R80 σ < 1.0%), while the MLE enhances the pRad spatial resolution (+244%) and is a strong candidate for pCT reconstruction.
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2018-01-01
The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman rank ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution based on the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and then, using 2-2.4 × 10^5 mLEs, the HEs were retrieved. The ML procedure allowed the structure of the mixed spaces to be reproduced in great detail.
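A minimal sketch of the nearest-neighbor step (synthetic data; the true mLE-HE relation of the maps is not reproduced here): train on paired (mLE, HE) values, then predict HE from mLE alone:

```python
# Sketch: k-nearest-neighbor regression of HE on mLE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
mle_train = rng.uniform(0.0, 2.0, size=4000)                 # "few thousand" pairs
he_train = 0.5 * np.exp(-mle_train) + 0.05 * rng.standard_normal(4000)

knn = KNeighborsRegressor(n_neighbors=10)
knn.fit(mle_train.reshape(-1, 1), he_train)

mle_new = rng.uniform(0.0, 2.0, size=200_000)                # mLE-only inputs
he_pred = knn.predict(mle_new.reshape(-1, 1))                # retrieved HEs
print(he_pred.mean(), he_pred.std())
```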
Hamelmann, Paul; Vullings, Rik; Schmitt, Lars; Kolen, Alexander F; Mischi, Massimo; van Laar, Judith O E H; Bergmans, Jan W M
2017-09-21
Doppler ultrasound (US) is the most commonly applied method to measure the fetal heart rate (fHR). When the fetal heart is not properly located within the ultrasonic beam, fHR measurements often fail. As a consequence, clinical staff need to reposition the US transducer on the maternal abdomen, which can be a time consuming and tedious task. In this article, a method is presented to aid clinicians with the positioning of the US transducer to produce robust fHR measurements. A maximum likelihood estimation (MLE) algorithm is developed, which provides information on fetal heart location using the power of the Doppler signals received in the individual elements of a standard US transducer for fHR recordings. The performance of the algorithm is evaluated with simulations and in vitro experiments performed on a beating-heart setup. Both the experiments and the simulations show that the heart location can be accurately determined with an error of less than 7 mm within the measurement volume of the employed US transducer. The results show that the developed algorithm can be used to provide accurate feedback on fetal heart location for improved positioning of the US transducer, which may lead to improved measurements of the fHR.
Seo, Dong Gi; Choi, Jeongwook
2018-05-17
Computerized adaptive testing (CAT) has been adopted in license examinations due to its test efficiency and accuracy. Much research on CAT has been published demonstrating its efficiency and measurement accuracy. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean medical license examination (KMLE). The study used a post-hoc (real data) simulation design. The item bank used in this study comprised all items in the 2017 KMLE. All CAT algorithms for this study were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model. Modal a Posteriori (MAP) or Expected a Posteriori (EAP) estimation provided more accurate estimates than MLE and WLE. Furthermore, maximum posterior weighted information (MPWI) or minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model was recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT. Based on this simulation study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
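A minimal sketch (not the 'catR' implementation) of the difference between MLE and EAP ability estimation under a 2PL model, for one response pattern with invented item parameters:

```python
# Sketch: MLE vs. EAP ability estimates for a 2PL response pattern.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (illustrative)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties
u = np.array([1, 1, 0, 1, 0])              # observed 0/1 responses

def loglik(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

theta_mle = minimize_scalar(lambda t: -loglik(t), bounds=(-4, 4), method="bounded").x

grid = np.linspace(-4, 4, 81)                       # quadrature for the posterior
post = np.exp([loglik(t) for t in grid]) * np.exp(-grid**2 / 2)  # N(0,1) prior
theta_eap = np.sum(grid * post) / np.sum(post)
print(theta_mle, theta_eap)  # EAP shrinks toward the prior mean
```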
Nonparametric predictive inference for combining diagnostic tests with parametric copula
NASA Astrophysics Data System (ADS)
Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.
2017-09-01
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for their dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. The copula is a well-known statistical concept for modelling dependence between random variables: a copula is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how the method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
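To make the copula-MLE step concrete, here is a minimal sketch for the bivariate Gaussian copula: transform the data to rank-based pseudo-observations and maximize the copula log-likelihood in the correlation parameter. The data and the family choice are assumptions for illustration; the paper considers several parametric families.

```python
import numpy as np
from scipy.stats import norm, rankdata
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = 0.7 * x + 0.7 * rng.standard_normal(500)      # synthetic dependent pair

u = rankdata(x) / (len(x) + 1)                    # pseudo-observations in (0,1)
v = rankdata(y) / (len(y) + 1)
z1, z2 = norm.ppf(u), norm.ppf(v)

def nll(rho):
    # negative log-density of the bivariate Gaussian copula at the z-scores
    return -np.sum(-0.5 * np.log(1 - rho**2)
                   + (2*rho*z1*z2 - rho**2*(z1**2 + z2**2)) / (2*(1 - rho**2)))

res = minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded")
print("MLE of copula parameter rho:", res.x)
```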
MSFC shuttle lightning research
NASA Technical Reports Server (NTRS)
Vaughan, Otha H., Jr.
1993-01-01
The shuttle mesoscale lightning experiment (MLE), flown on earlier shuttle flights, and most recently flown on the following space transportation systems (STS's), STS-31, -32, -35, -37, -38, -40, -41, and -48, has continued to focus on obtaining additional quantitative measurements of lightning characteristics and on creating a database for use in demonstrating observation simulations for future spaceborne lightning mapping systems. These flights are also providing design criteria data for a proposed shuttle MLE-type lightning research instrument called mesoscale lightning observational sensors (MELOS), which is currently under development at MSFC.
Novel optical-based methods and analyses for elucidating cellular mechanics and dynamics
NASA Astrophysics Data System (ADS)
Koo, Peter K.
Resolving distinct biochemical interaction states by analyzing the diffusive behaviors of individual protein trajectories is challenging due to the limited statistics provided by short trajectories and by experimental noise sources, which are intimately coupled into each protein's localization. In the first part of this thesis, we introduce a novel machine-learning-based classification methodology, called perturbation expectation-maximization (pEM), which simultaneously analyzes a population of protein trajectories to uncover the system of short-time diffusive behaviors that collectively result from distinct biochemical interactions. We then discuss an experimental application of pEM to Rho GTPase, an integral regulator of cytoskeletal dynamics and cellular homeostasis, inside live cells. We also derive the maximum likelihood estimator (MLE) for driven diffusion, confined diffusion, and fractional Brownian motion, and demonstrate that MLE yields improved estimates in comparison with traditional diffusion analysis, namely mean squared displacement analysis. In addition, we introduce mleBayes, an empirical Bayesian model selection scheme for classifying an individual protein trajectory to a given diffusion mode. By employing mleBayes on simulated data, we demonstrate that accurate determination of the underlying diffusive properties, beyond normal diffusion, remains challenging when analyzing particle trajectories on an individual basis. To improve upon the statistical limitations of classifying trajectories individually, we extend pEM with a new version (pEMv2) that simultaneously analyzes a collection of particle trajectories to uncover the system of interactions which give rise to unique normal or non-normal diffusive states. We test the performance of pEMv2 on various sets of simulated particle trajectories which transition between various modes of normal and non-normal diffusion, to highlight considerations when employing pEMv2 analysis. We envision that the presented methodologies will be applicable to a wide range of single-protein tracking data where different interactions result in distinct diffusive behaviors. More generally, this study brings us an important step closer to the possibility of monitoring the endogenous biochemistry of diffusing proteins within live cells with single-molecule resolution. In the second part of this thesis, the role of chromatin association with the nuclear envelope in nuclear mechanics is explored. Changes in the mechanical properties of the nucleus are increasingly found to be critical for development and disease. However, relatively little is known about the variables that cells modulate to define nuclear mechanics. The best understood player is lamin A, a protein linked to a diverse set of genetic diseases termed laminopathies. The properties of lamin A that are compromised in these diseases (and therefore underlie their pathology) remain poorly understood. One model focuses on a mechanical role for a polymeric network of lamins associated with the nuclear envelope (NE), which supports nuclear integrity. However, because heterochromatin is strongly associated with the lamina, it remains unclear whether it is the lamin polymer, the associated chromatin, or both that allow the lamina to mechanically stabilize nuclei. Decoupling the impact of the lamin polymer itself from that of the associated chromatin has proven very challenging.
Here, we take advantage of the model organism S. pombe, which does not express lamins, as an experimental framework in which to address the impact of chromatin and its association with the nuclear periphery on nuclear mechanics. Using a combination of new image analysis tools for in vivo imaging of nuclear dynamics and a novel optical tweezers assay capable of directly probing nuclear mechanics, we find that the association of chromatin with the NE through integral membrane proteins plays a critical role in supporting nuclear integrity. When chromatin is decoupled from the NE, nuclei are softer, undergo much larger fluctuations in vivo in response to microtubule forces, and are defective at resolving nuclear deformations. Our data further suggest that association of chromatin with the NE attenuates the flow of chromatin into nuclear fluctuations, thereby preventing permanent changes in nuclear shape.
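As a pointer to the kind of estimator derived in the first part of the thesis, the sketch below gives the closed-form MLE of the diffusion coefficient for the simplest case: pure Brownian motion in 2D sampled at a fixed interval. Localization noise, drift, and confinement, which the thesis treats, are ignored here.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, D_true, n = 0.05, 0.1, 2000                      # s, um^2/s, steps (assumed)
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)                      # simulated 2D trajectory

disp = np.diff(traj, axis=0)                         # frame-to-frame displacements
D_mle = np.sum(disp**2) / (4 * dt * len(disp))       # 2D MLE: <dx^2+dy^2>/(4 dt)
print(f"D_mle = {D_mle:.4f} (true {D_true})")
```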
Maximum likelihood techniques applied to quasi-elastic light scattering
NASA Technical Reports Server (NTRS)
Edwards, Robert V.
1992-01-01
An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is already a problem: the particle size is inferred very indirectly, from a signal generated by the motion of particles in the system, and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory, and a functioning set of software, to oversee the measurement process and extract the particle-size information while providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle-size parameters using a modified histogram approach.
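For orientation only, the sketch below fits the single-exponential decay of a monodisperse QELS autocorrelation function; under Gaussian noise, least squares coincides with the MLE, and the parameter covariance supplies the kind of error estimates the project emphasizes. The noise model and values are assumptions, and the real problem is the ill-posed multi-component inversion described above.

```python
import numpy as np
from scipy.optimize import curve_fit

tau = np.linspace(1e-6, 5e-3, 200)                      # lag times (s)
beta_t, gamma_t = 0.8, 1200.0                           # assumed true values
rng = np.random.default_rng(3)
g2m1 = beta_t * np.exp(-2 * gamma_t * tau) + 0.01 * rng.standard_normal(tau.size)

def model(t, beta, gamma):
    return beta * np.exp(-2 * gamma * t)                # g2(tau) - 1

popt, pcov = curve_fit(model, tau, g2m1, p0=[0.5, 500.0])
perr = np.sqrt(np.diag(pcov))                           # error estimates
print("beta, Gamma:", popt, "+/-", perr)                # Gamma maps to particle size
```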
Wang, Xing; Chen, Qiuhua; Tian, Wenjuan; Wang, Jianqing; Cheng, Lu; Lu, Jun; Chen, Mingqi; Pei, Yinhao; Li, Can; Chen, Gong; Gu, Ning
2017-01-01
Altered energy metabolism may be one cause of pattern differences in acute lung injury (ALI), but the detailed features at the single-cell level remain unclear. Changes in intracellular temperature and adenosine triphosphate (ATP) concentration within the single cell may help to clarify the role of energy metabolism in causing ALI. In vitro ALI models were established by treating mouse lung epithelial (MLE-12) cells with lipopolysaccharide (LPS), hydrogen peroxide (H2O2), hydrochloric acid (HCl), and cobalt chloride (CoCl2), respectively. A 100 nm micro thermocouple probe (TMP) was inserted into the cytosol with a micromanipulation system, and thermoelectric readings were recorded to calculate the intracellular temperature from a standard curve. The total ATP contents of the MLE-12 cells were evaluated at different time intervals after treatment. A significant increase of intracellular temperature was observed after 10 or 20 μg/L LPS and HCl treatments. HCl increased the temperature in a dose-dependent manner. On the contrary, H2O2 induced a significant decline of intracellular temperature after treatment. No significant difference in intracellular temperature was observed after CoCl2 exposure. The intracellular ATP levels decreased in a time-dependent manner after treatment with H2O2 and HCl, while LPS and CoCl2 had no significant effect on ATP levels. The intracellular temperature responses varied across the different ALI models. The concentration of ATP in the MLE-12 cells played a part in the intracellular temperature changes, but no direct correlation was observed between the intracellular temperature and the ATP concentration in the MLE-12 cells.
Russo, Daniela; Miglionico, Rocchina; Carmosino, Monica; Bisaccia, Faustino; Andrade, Paula B; Valentão, Patrícia; Milella, Luigi; Armentano, Maria Francesca
2018-01-08
Sclerocarya birrea (A.Rich.) Hochst (Anacardiaceae) is a savannah tree that has long been used in sub-Saharan Africa as a medicinal remedy for numerous ailments. The purpose of this study was to increase the scientific knowledge about this plant by evaluating the total content of polyphenols, flavonoids, and tannins in the methanol extracts of the leaves and bark (MLE and MBE, respectively), as well as the in vitro antioxidant activity and biological activities of these extracts. Reported results show that MLE is rich in flavonoids (132.7 ± 10.4 mg of quercetin equivalents/g), whereas MBE has the highest content of tannins (949.5 ± 29.7 mg of tannic acid equivalents/g). The antioxidant activity was measured using four different in vitro tests: β-carotene bleaching (BCB), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), O₂⁻•, and nitric oxide (NO•) assays. In all cases, MBE was the most active compared to MLE and the standards used (Trolox and ascorbic acid). Furthermore, MBE and MLE were tested to evaluate their activity in HepG2 and fibroblast cell lines. A higher cytotoxic activity of MBE was evidenced and confirmed by more pronounced alterations in cell morphology. MBE induced cell death, triggering the intrinsic apoptotic pathway by reactive oxygen species (ROS) generation, which led to a loss of mitochondrial membrane potential with subsequent cytochrome c release from the mitochondria into the cytosol. Moreover, MBE showed lower cytotoxicity in normal human dermal fibroblasts, suggesting its potential as a selective anticancer agent.
Hyperthermia promotes and prevents respiratory epithelial apoptosis through distinct mechanisms.
Nagarsekar, Ashish; Tulapurkar, Mohan E; Singh, Ishwar S; Atamas, Sergei P; Shah, Nirav G; Hasday, Jeffrey D
2012-12-01
Hyperthermia has been shown to confer cytoprotection and to augment apoptosis in different experimental models. We analyzed the mechanisms of both effects in the same mouse lung epithelial (MLE) cell line (MLE15). Exposing MLE15 cells to heat shock (HS; 42°C, 2 h) or febrile-range hyperthermia (39.5°C) concurrent with activation of the death receptors, TNF receptor 1 or Fas, greatly accelerated apoptosis, which was detectable within 30 minutes and was associated with accelerated activation of caspase-2, -8, and -10, and the proapoptotic protein, Bcl2-interacting domain (Bid). Caspase-3 activation and cell death were partially blocked by inhibitors targeting all three initiator caspases. Cells expressing the IκB superrepressor were more susceptible than wild-type cells to TNF-α-induced apoptosis at 37°C, but HS and febrile-range hyperthermia still increased apoptosis in these cells. Delaying HS for 3 hours after TNF-α treatment abrogated its proapoptotic effect in wild-type cells, but not in IκB superrepressor-expressing cells, suggesting that TNF-α stimulates delayed resistance to the proapoptotic effects of HS through an NF-κB-dependent mechanism. Pre-exposure to 2-hour HS beginning 6 to 16 hours before TNF-α treatment or Fas activation reduced apoptosis in MLE15 cells. The antiapoptotic effects of HS pretreatment were reduced in TNF-α-treated embryonic fibroblasts from heat shock factor-1 (HSF1)-deficient mice, but the proapoptotic effects of concurrent HS were preserved. Thus, depending on the temperature and timing relative to death receptor activation, hyperthermia can exert pro- and antiapoptotic effects through distinct mechanisms.
NASA Astrophysics Data System (ADS)
Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra
2016-06-01
Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: cork company 1, cork company 2, poultry, slaughterhouse for cattle, riding arena, and production of animal feed. The measurement tool was a portable direct-reading device that provides the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. These sizes are of interest because they might be most closely related to adverse health effects. The aim is to identify the particle sizes that best discriminate between the industries, with the ultimate goal of classifying industries with regard to potential negative effects on workers' health. Several methods of discriminant analysis were applied to the occupational exposure data and compared with respect to classification accuracy: linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (Maximum Likelihood Estimators), MVE (Minimum Volume Ellipsoid), "t", MCD (Minimum Covariance Determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest classification accuracy on the data set under study. Results indicate that the particle number concentration at the 0.5 µm diameter is the parameter that best discriminates between the industries.
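A minimal cross-validated comparison in the spirit of the study, on synthetic data (the study's exposure data are not reproduced here): six particle-size features, six industry classes, and three of the candidate classifiers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 6 particle-size channels as features, 6 industries as classes (assumed shape)
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("multinomial logistic", LogisticRegression(max_iter=2000))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```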
Song, Kang; Suenaga, Toshikazu; Harper, Willie F; Hori, Tomoyuki; Riya, Shohei; Hosomi, Masaaki; Terada, Akihiko
2015-12-01
Nitrous oxide (N2O) is emitted from the modified Ludzak-Ettinger (MLE) process, a widely used activated sludge system, and this emission requires mitigation. The effects of aeration rates and internal recycle flow (IRF) ratios on N2O emission were investigated in an MLE process fed with glycerol. Reducing the aeration rate from 1.5 to 0.5 L/min increased the gaseous N2O concentration from the aerobic tank and the dissolved N2O concentration in the anoxic tank by 54.4 and 53.4 %, respectively. During the period of higher aeration, the N2O-N conversion ratio was 0.9 % and the potential N2O reducers were predominantly Rhodobacter, which accounted for 21.8 % of the total population. Increasing the IRF ratio from 3.6 to 7.2 decreased the N2O emission rate from the aerobic tank and the dissolved N2O concentration in the anoxic tank by 56 and 48 %, respectively. This study suggests effective N2O mitigation strategies for MLE systems.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs for the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters, i.e., a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set.
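The EM idea is easy to demonstrate on complete (non-censored) data for a two-subpopulation mixture; the paper's algorithm additionally handles censoring and up to five subpopulations. In this sketch the M-step maximizes each component's responsibility-weighted Weibull log-likelihood numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

t = np.concatenate([weibull_min.rvs(0.8, scale=100, size=300, random_state=1),
                    weibull_min.rvs(3.0, scale=500, size=300, random_state=2)])

def weighted_nll(params, t, r):
    beta, eta = params                         # shape, scale
    if beta <= 0 or eta <= 0:
        return np.inf
    return -np.sum(r * weibull_min.logpdf(t, beta, scale=eta))

w, p1, p2 = 0.5, [1.0, 80.0], [2.0, 600.0]     # initial guesses
for _ in range(50):                            # EM iterations
    f1 = weibull_min.pdf(t, p1[0], scale=p1[1])
    f2 = weibull_min.pdf(t, p2[0], scale=p2[1])
    r = w * f1 / (w * f1 + (1 - w) * f2)       # E-step: responsibilities
    w = r.mean()                               # M-step: mixing weight
    p1 = minimize(weighted_nll, p1, args=(t, r),     method="Nelder-Mead").x
    p2 = minimize(weighted_nll, p2, args=(t, 1 - r), method="Nelder-Mead").x
print(w, p1, p2)                               # recovered mixture parameters
```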
A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.
Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C
2001-02-01
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
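To show the mechanics of the benchmark calculation, the sketch below fits a two-parameter log-probit model P(d) = Φ(a + b·ln d) to quantal data by MLE and inverts it at a 10% response. The response counts are invented (not the Mutti et al. data), and the background-response correction a real BMD analysis would include is omitted.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

dose = np.array([15.0, 44.0, 74.0, 115.0])    # ppm styrene (from the study)
n    = np.array([10, 10, 10, 10])             # hypothetical group sizes
resp = np.array([2, 4, 6, 8])                 # hypothetical abnormal counts

def nll(theta):
    a, b = theta
    p = np.clip(norm.cdf(a + b * np.log(dose)), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

a, b = minimize(nll, x0=[-3.0, 1.0], method="Nelder-Mead").x
bmc10 = np.exp((norm.ppf(0.10) - a) / b)      # dose with a 10% predicted response
print(f"MLE of 10% benchmark concentration: {bmc10:.1f} ppm")
```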
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations, and the MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
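The MTF estimation step is standard enough to sketch: take the line-spread function (LSF) across the reconstructed image of the plane source, Fourier transform it, and normalize at zero frequency. The Gaussian LSF below is an assumption standing in for an actual reconstructed profile.

```python
import numpy as np

pixel_mm = 0.5
x = np.arange(-32, 32) * pixel_mm
lsf = np.exp(-x**2 / (2 * 2.0**2))              # assumed Gaussian LSF profile

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                   # normalize to 1 at f = 0
freq = np.fft.rfftfreq(lsf.size, d=pixel_mm)    # spatial frequency, cycles/mm
print(freq[:5], mtf[:5])
```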
Schlichting, Nadine; de Jong, Ritske; van Rijn, Hedderik
2018-06-20
Certain EEG components (e.g., the contingent negative variation, CNV, or beta oscillations) have been linked specifically to the perception of temporal magnitudes. However, it is as yet unclear whether these EEG components are really unique to time perception or reflect the perception of magnitudes in general. In the current study we recorded EEG while participants made judgments about duration (time condition) or numerosity (number condition) in a comparison task. This design allowed us to directly compare EEG signals between the processing of time and number. Stimuli consisted of a series of blue dots appearing and disappearing dynamically on a black screen; each stimulus was characterized by its duration and the total number of dots it consisted of. Because tasks like these are known to elicit perceptual interference effects, we used a maximum-likelihood estimation (MLE) procedure in an extensive post hoc analysis to determine, for each participant and dimension separately, to what extent time and numerosity information were taken into account when making a judgment. This approach enabled us to capture individual differences in behavioral performance and, based on the MLE estimates, to select a subset of participants who suppressed task-irrelevant information. Even for this subset of participants, who showed no or only small interference effects and thus were thought to truly process temporal information in the time condition and numerosity information in the number condition, we found CNV patterns in the time-domain EEG signals for both tasks, more pronounced in the time task. We found no substantial evidence for differences between the processing of temporal and numerical information in the time-frequency domain.
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source; due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method, and it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analysis (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like the MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions, and a priori parameter distributions within one platform-independent package.
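Since SPOT is a Python package, the flavor of its two core ingredients, an objective (likelihood) function and a sampler, can be shown in a few lines. The sketch below implements the Nash-Sutcliffe efficiency and a plain Monte Carlo search over a uniform prior; it is a generic illustration and deliberately does not use the SPOT API itself, whose exact interface is not reproduced here.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe model efficiency: 1 is perfect, <0 is worse than the mean."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

def model(p, x):
    return p[0] * np.exp(-p[1] * x)        # stand-in for an environmental model

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 50)
obs = model([2.0, 0.3], x) + 0.05 * rng.standard_normal(x.size)

best_p, best_nse = None, -np.inf
for _ in range(5000):                      # plain Monte Carlo over the prior
    p = rng.uniform([0.1, 0.01], [5.0, 1.0])
    score = nse(model(p, x), obs)
    if score > best_nse:
        best_p, best_nse = p, score
print(best_p, best_nse)
```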
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
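To make the estimation problem concrete, the sketch below fits the two COM-Poisson parameters by MLE for a single sample with a constant rate; the article's GLM additionally links the rate to covariates. The truncation point of the normalizing series is an implementation choice.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def com_poisson_logpmf(y, lam, nu, jmax=200):
    # P(Y=y) = lam^y / ((y!)^nu * Z); Z truncated at jmax terms
    j = np.arange(jmax)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logZ

def nll(theta, y):
    loglam, lognu = theta                      # optimize on the log scale
    return -np.sum(com_poisson_logpmf(y, np.exp(loglam), np.exp(lognu)))

rng = np.random.default_rng(6)
y = rng.poisson(3.0, size=500)                 # equidispersed test data
res = minimize(nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam, nu = np.exp(res.x)
print(f"lambda = {lam:.2f}, nu = {nu:.2f}  (nu ~ 1 indicates Poisson)")
```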
Bayesian-based localization of wireless capsule endoscope using received signal strength.
Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Tarokh, Vahid; Johansen, Per Michael
2014-01-01
In wireless body area sensor networking (WBASN) applications such as gastrointestinal (GI) tract monitoring using wireless video capsule endoscopy (WCE), the performance of the out-of-body wireless link propagating through different body media (i.e., blood, fat, muscle and bone) is still under investigation. Most localization algorithms are vulnerable to variations in the path-loss coefficient, resulting in unreliable location estimation. In this paper, we propose a novel, robust, probabilistic Bayesian approach using received-signal-strength (RSS) measurements that accounts for Rayleigh fading, a variable path-loss exponent, and uncertainty in the location information received from neighboring nodes and anchors. The results of this study showed that the localization root mean square error of our Bayesian method was 1.6 mm, very close to the optimal Cramer-Rao lower bound (CRLB) and significantly smaller than that of other existing localization approaches (classical MDS: 64.2 mm; dwMDS: 32.2 mm; MLE: 36.3 mm; POCS: 2.3 mm).
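A stripped-down version of RSS-based ML localization, assuming a log-distance path-loss model with log-normal shadowing and a known path-loss exponent (the paper's contribution is precisely to relax such assumptions within a Bayesian framework); units and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
anchors = np.array([[0, 0], [0, 100], [100, 0], [100, 100]])   # anchor positions
true_pos = np.array([42.0, 57.0])
p0, n_pl, sigma = -30.0, 2.2, 2.0          # ref. power (dB), exponent, shadowing (dB)

d = np.linalg.norm(anchors - true_pos, axis=1)
rss = p0 - 10 * n_pl * np.log10(d) + sigma * rng.standard_normal(4)

xs = ys = np.arange(0, 101, 0.5)
X, Y = np.meshgrid(xs, ys)
ll = np.zeros_like(X)
for i, (ax, ay) in enumerate(anchors):     # Gaussian log-likelihood on the grid
    di = np.hypot(X - ax, Y - ay) + 1e-9
    ll -= (rss[i] - (p0 - 10 * n_pl * np.log10(di)))**2
idx = np.unravel_index(np.argmax(ll), ll.shape)
print("estimated position:", X[idx], Y[idx])
```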
Multi-Entity Bayesian Networks Learning in Predictive Situation Awareness
2013-06-01
A parameter learning algorithm for Multi-Entity Bayesian Networks (MEBN) is presented and evaluated on a case study from PROGNOS. In the learning procedure, local probability distributions (LPDs) for all resident nodes in the MTheory are generated by MLE.
Jeong, Sekyoo; Lee, Sin Hee; Park, Byeong Deog; Wu, Yan; Man, George; Man, Mao-Qiang
2016-03-01
The management of sensitive skin, which affects over 60% of the general population, has been a long-standing challenge for both patients and clinicians. Because a defective epidermal permeability barrier is one of the clinical features of sensitive skin, barrier-enhancing products could be an optimal regimen for sensitive skin. In the present study, we evaluated the efficacy and safety of two barrier-enhancing products, Atopalm® Multi-Lamellar Emulsion (MLE) Cream and Physiogel® Intensive Cream, for sensitive skin. Sixty patients with sensitive skin, aged 22-40 years, were randomly assigned to one group treated with Atopalm MLE Cream and another group treated with Physiogel Intensive Cream twice daily for 4 weeks. Lactic acid stinging test scores (LASTS), stratum corneum (SC) hydration, and transepidermal water loss (TEWL) were assessed before, and 2 and 4 weeks after, the start of treatment. Atopalm MLE Cream significantly lowered TEWL after 2 and 4 weeks of treatment (p < 0.01). In contrast, Physiogel Intensive Cream significantly increased TEWL after 2 weeks of treatment (p < 0.05), while TEWL significantly decreased after 4 weeks of treatment. Moreover, both Atopalm MLE Cream and Physiogel Intensive Cream significantly increased SC hydration and improved LASTS after 4 weeks of treatment. Both barrier-enhancing products are effective and safe for improving epidermal functions, including permeability barrier, SC hydration and LASTS, in sensitive skin, and could be a valuable alternative for the management of sensitive skin. Veterans Affairs Medical Center, San Francisco, California, USA, and NeoPharm Co., Ltd., Daejeon, Korea.
Lim, Hyun Hwa; Yang, Soo Jin; Kim, Yuri; Lee, Myoungsook; Lim, Yunsook
2013-08-01
The aim of this study was to investigate whether a combined treatment of mulberry leaf extract (MLE) and mulberry fruit extract (MFE) was effective for improving obesity and obesity-related inflammation and oxidative stress in high fat (HF) diet-induced obese mice. After obesity was induced by HF diet for 9 weeks, the mice were divided into eight groups: (1) lean control, (2) HF diet-induced obese control, (3) 1:1 ratio of MLE and MFE at doses of 200 (L1:1), (4) 500 (M1:1), and (5) 1000 (H1:1) mg/kg per day, and (6) 2:1 ratio of MLE and MFE at doses of 200 (L2:1), (7) 500 (M2:1), and (8) 1000 (H2:1) mg/kg per day. All six combined treatments significantly lowered body weight gain, plasma triglycerides, and lipid peroxidation levels after the 12-week treatment period. Additionally, all combined treatments suppressed hepatic fat accumulation and reduced epididymal adipocyte size. These improvements were accompanied by decreases in protein levels of proinflammatory markers (tumor necrosis factor-alpha, C-reactive protein, interleukin-1, inducible nitric oxide synthase, and phospho-nuclear factor-kappa B inhibitor alpha) and oxidative stress markers (heme oxygenase-1 and manganese superoxide dismutase). M2:1 was the most effective ratio and dose for the improvements in obesity, inflammation, and oxidative stress. These results demonstrate that a combined MLE and MFE treatment ameliorated obesity and obesity-related metabolic stressors and suggest that it can be used as a means to prevent and/or treat obesity.
Removing the Threat of Diclofenac to Critically Endangered Asian Vultures
Swan, Gerry; Naidoo, Vinasan; Cuthbert, Richard; Pain, Deborah J; Swarup, Devendra; Prakash, Vibhu; Taggart, Mark; Bekker, Lizette; Das, Devojit; Diekmann, Jörg; Diekmann, Maria; Killian, Elmarié; Meharg, Andy; Patra, Ramesh Chandra; Saini, Mohini; Wolter, Kerri
2006-01-01
Veterinary use of the nonsteroidal anti-inflammatory drug (NSAID) diclofenac in South Asia has resulted in the collapse of populations of three vulture species of the genus Gyps to the most severe category of global extinction risk. Vultures are exposed to diclofenac when scavenging on livestock treated with the drug shortly before death. Diclofenac causes kidney damage, increased serum uric acid concentrations, visceral gout, and death. Concern about this issue led the Indian Government to announce its intention to ban the veterinary use of diclofenac by September 2005. Implementation of a ban is still in progress late in 2005, and to facilitate this we sought potential alternative NSAIDs by obtaining information from captive bird collections worldwide. We found that the NSAID meloxicam had been administered to 35 captive Gyps vultures with no apparent ill effects. We then undertook a phased programme of safety testing of meloxicam on the African white-backed vulture Gyps africanus, which we had previously established to be as susceptible to diclofenac poisoning as the endangered Asian Gyps vultures. We estimated the likely maximum level of exposure (MLE) of wild vultures and dosed birds by gavage (oral administration) with increasing quantities of the drug until the likely MLE was exceeded in a sample of 40 G. africanus. Subsequently, six G. africanus were fed tissues from cattle which had been treated with a higher than standard veterinary course of meloxicam prior to death. In the final phase, ten Asian vultures of two of the endangered species (Gyps bengalensis, Gyps indicus) were dosed with meloxicam by gavage; five of them at more than the likely MLE dosage. All meloxicam-treated birds survived all treatments, and none suffered any obvious clinical effects. Serum uric acid concentrations remained within the normal limits throughout, and were significantly lower than those from birds treated with diclofenac in other studies. We conclude that meloxicam is of low toxicity to Gyps vultures and that its use in place of diclofenac would reduce vulture mortality substantially in the Indian subcontinent. Meloxicam is already available for veterinary use in India. PMID:16435886
2013-01-01
Background: Deer tick virus, DTV, is a genetically and ecologically distinct lineage of Powassan virus (POWV), also known as lineage II POWV. Human incidence of POW encephalitis has increased in the last 15 years, potentially due to the emergence of DTV, particularly in the Hudson Valley of New York State. We initiated an extensive sampling campaign to determine whether POWV was extant throughout the Hudson Valley in tick vectors and/or vertebrate hosts. Methods: More than 13,000 ticks were collected from hosts or vegetation and tested for the presence of DTV using molecular and virus isolation techniques. Vertebrate hosts of Ixodes scapularis (black-legged tick) were trapped (mammals) or netted (birds), and blood samples were analyzed for the presence of neutralizing antibodies to POWV. Maximum likelihood estimates (MLE) were calculated to determine infection rates in ticks at each study site. Results: Evidence of DTV was identified each year from 2007 to 2012, in nymphal and adult I. scapularis collected from the Hudson Valley. 58 tick pools were positive for virus and/or RNA. Infection rates were higher in adult ticks collected from areas east of the Hudson River. MLE limits ranged from 0.2 to 6.0 infected adults per 100 at sites where DTV was detected. Virginia opossums, striped skunks and raccoons were the source of infected nymphal ticks collected as replete larvae. Serologic evidence of POWV infection was detected in woodchucks (4/6), an opossum (1/6), and birds (4/727). Lineage I, prototype POWV, was not detected. Conclusions: These data demonstrate widespread enzootic transmission of DTV throughout the Hudson Valley, in particular areas east of the river. High infection rates were detected in counties where recent POW encephalitis cases have been identified, supporting the hypothesis that lineage II POWV, DTV, is responsible for these human infections. PMID:24016533
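The pooled-testing MLE behind such infection rates can be written down compactly: a pool of k ticks tests positive with probability 1-(1-p)^k, and p maximizes the resulting likelihood. The pool sizes and results below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

pool_sizes = np.array([5, 5, 10, 10, 10, 25])     # hypothetical pool sizes
positive   = np.array([1, 0, 1, 0, 0, 1])         # hypothetical pool results

def nll(p):
    q = (1 - p) ** pool_sizes                     # P(pool negative | rate p)
    return -np.sum(np.where(positive == 1, np.log(1 - q), np.log(q)))

res = minimize_scalar(nll, bounds=(1e-6, 0.5), method="bounded")
print(f"MLE infection rate: {100 * res.x:.2f} per 100 ticks")
```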
Lee, J-H; Lee, S-M; Choi, G-C; Park, H-S; Kang, D-H; Park, J-J
2011-01-01
Spent sulfidic caustic (SSC) produced from petrochemical plants contains a high concentration of hydrogen sulfide and alkalinity, and some almost non-biodegradable organic compounds such as benzene, toluene, ethylbenzene and xylenes (BTEX). SSC is mainly incinerated with auxiliary fuel, leading to secondary pollution problems, so the reuse of this waste is becoming increasingly important from economic and environmental viewpoints. To denitrify wastewater with a low COD/N ratio, additional carbon sources are required; thus, autotrophic denitrification has attracted increasing attention. In this study, SSC was injected as an electron donor for sulfur-based autotrophic denitrification in the modified Ludzack-Ettinger (MLE) process. The efficiencies of nitrification, COD removal, and total nitrogen (TN) removal were evaluated at varying SSC dosages. Adequate SSC injection yielded stable autotrophic denitrification, and no BTEX were detected in the monitored effluent. To analyse the microbial community of the MLE process, PCR-DGGE based on 16S rDNA with EUB primers, TD primers, and the nirK gene with nirK primers was performed, supporting the application of SSC in the MLE process.
Test of the Hill Stability Criterion against Chaos Indicators
NASA Astrophysics Data System (ADS)
Satyal, Suman; Quarles, Billy; Hinse, Tobias
2012-10-01
The efficacy of the Hill Stability (HS) criterion is tested against other known chaos indicators, such as the maximal Lyapunov exponent (MLE) and Mean Exponential Growth of Nearby Orbits (MEGNO) maps. First, orbits of four observationally verified binary star systems, γ Cephei, Gliese-86, HD41004, and HD196885, are integrated using standard integration packages (MERCURY, SWIFTER, NBI, C/C++). The HS, which measures the orbital perturbation of a planet around the primary star due to the secondary star, is calculated for each system. The Lyapunov exponent spectra are generated to measure the divergence/convergence rate of stable manifolds, and the MEGNO maps are generated from the variational equations of the system during the integration process; these maps allow stable and unstable dynamical systems to be accurately differentiated. The results obtained from the analysis of the HS, MLE, and MEGNO maps are then checked for their dynamical variations and resemblance. The HS of most of the planets appears stable and quasi-periodic for at least ten million years. The MLE and the MEGNO maps also indicate local quasi-periodicity and global stability over a relatively short integration period. The HS criterion is found to be a comparably efficient tool for measuring the stability of planetary orbits.
Shi, Fanrong; Tuo, Xianguo; Yang, Simon X; Li, Huailiang; Shi, Rui
2017-05-04
Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed along the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation, and multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism is employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm avoids the estimation errors caused by clock drift and minimizes the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring.
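The core TTME/MLE computation can be illustrated compactly. Under symmetric, Gaussian-distributed link delays (an assumption; the paper's delay model and timeout/retry logic are richer), each exchange yields an unbiased offset observation, and the sample mean over multiple TTMEs is the MLE.

```python
import numpy as np

rng = np.random.default_rng(8)
true_offset, n_exchanges = 2.5e-3, 20                    # s (assumed values)
delay = lambda: 1e-3 + 2e-4 * rng.standard_normal()      # link delay + jitter

obs = []
for _ in range(n_exchanges):
    t1 = 0.0                            # A sends (A's clock)
    t2 = t1 + delay() + true_offset     # B receives (B's clock)
    t3 = t2 + 1e-4                      # B replies shortly after
    t4 = t3 - true_offset + delay()     # A receives (A's clock)
    obs.append(((t2 - t1) - (t4 - t3)) / 2)   # per-exchange offset estimate
print(f"offset MLE: {np.mean(obs)*1e3:.3f} ms (true {true_offset*1e3} ms)")
```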
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
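The MPN computation described above can be made concrete in a few lines: under the Poisson assumption, a tube inoculated with volume v is non-sterile with probability 1-exp(-cv), and the MPN is the concentration c maximizing the product of these probabilities. The tube counts below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

volumes  = np.array([10.0, 1.0, 0.1])      # mL per tube at each dilution
n_tubes  = np.array([5, 5, 5])
positive = np.array([5, 3, 1])             # non-sterile tubes observed

def nll(c):
    p_pos = 1 - np.exp(-c * volumes)       # P(tube positive | concentration c)
    return -np.sum(positive * np.log(p_pos) - (n_tubes - positive) * c * volumes)

res = minimize_scalar(nll, bounds=(1e-6, 10.0), method="bounded")
print(f"MPN (MLE): {res.x:.3f} organisms per mL")
```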
Neural Network Technique for Continuous Transition from Ocean to Coastal Retrackers
NASA Astrophysics Data System (ADS)
Hazrina Idris, Nurul; Deng, Xiaoli; Hawani Idris, Nurul
2017-04-01
This paper presents the development of a neural network for the continuous transition of altimeter sea surface heights when switching from ocean to coastal waveform retrackers. In attempting to produce precise coastal sea level anomalies (SLAs) via waveform retracking, an issue arises when employing multiple retracking algorithms (i.e., MLE-4, sub-waveform and threshold): the relative offsets between those retrackers create 'jumps' in the retracked SLA profiles. In this study, the offset between retrackers is minimized using a multi-layer feed-forward neural network, which reduces the offset values by modelling the complicated functional relationships among the retracked SLAs. The technique is tested over the region of the Great Barrier Reef (GBR), Australia. Validation against the Townsville and Bundaberg tide gauges shows that the threshold retracker achieves temporal correlations (r) of 0.84 and 0.75, respectively, with a root mean square (RMS) error of 16 cm for both stations, indicating that this retracker produces more accurate SLAs than the other two. Meanwhile, the values of r (RMS error) for MLE-4 are only 0.79 (18 cm) and 0.71 (16 cm), respectively, and for the sub-waveform retracker 0.82 (16 cm) and 0.67 (16 cm), respectively. Therefore, with the neural network, retracked SLAs from MLE-4 and the sub-waveform retracker are aligned to those of the threshold retracker. The performance of the neural network is compared with the usual procedure of offset removal, which is based on the mean of SLA differences (the mean method). The performance is assessed by computing the standard deviation of the difference (STD) between the SLAs above a reference ellipsoid and the geoidal height, and the improvement percentage (IMP). The results indicate that the neural network improves SLA precision in all 12 cases, while the mean method provides improvement in 10 out of 12 cases, with deterioration seen in two cases. In terms of STD and IMP, the neural network reduces the offset better than the mean method: the IMPs with the neural network reach up to 67% for Jason-1 and 73% for Jason-2, whereas with the mean method the IMPs only reach up to 28% and 46%, respectively. In conclusion, the neural network technique efficiently reduces the offset among retrackers by handling the linear and nonlinear relationships between them, thus providing a seamless transition from the open ocean to the coast, and vice versa. On-going studies consider other geophysical parameters, such as significant wave height, that might be related to the variation of the offset in the neural network.
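A toy version of the offset-alignment idea, with synthetic SLA series standing in for the real retracked data: a small feed-forward regressor learns the mapping from the MLE-4 and sub-waveform SLAs to the threshold-retracked SLA. All series, offsets, and network settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
sla_thresh = np.cumsum(0.02 * rng.standard_normal(800))        # reference SLA (m)
X = np.column_stack([
    sla_thresh + 0.15 + 0.03 * rng.standard_normal(800),       # "MLE-4" with offset
    sla_thresh - 0.08 + 0.02 * rng.standard_normal(800),       # "sub-waveform"
])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X[:600], sla_thresh[:600])                             # train on first 600
aligned = net.predict(X[600:])                                 # align the hold-out
print("residual std (m):", np.std(aligned - sla_thresh[600:]))
```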
Weak and Dynamic GNSS Signal Tracking Strategies for Flight Missions in the Space Service Volume.
Jing, Shuai; Zhan, Xingqun; Liu, Baoyu; Chen, Maolin
2016-09-02
Weak signals and high dynamics are the two primary concerns of space navigation using GNSS (Global Navigation Satellite System) in the space service volume (SSV). The paper first defines a reference third-order phase-locked loop (PLL) as the baseline of an onboard GNSS receiver, and demonstrates the inadequacy of this conventional architecture. Then an adaptive four-state Kalman filter (KF)-based algorithm is introduced to optimize the loop noise bandwidth; it adaptively regulates its filter gain according to the received signal power and line-of-sight (LOS) dynamics. To overcome the problem of losing lock in weak-signal and high-dynamic environments, an open-loop tracking strategy aided by an inertial navigation system (INS) is recommended, and the traditional maximum likelihood estimation (MLE) method is modified in a non-coherent way by reconstructing the likelihood cost function. Furthermore, a typical mission with combined orbital maneuvering and non-maneuvering arcs is taken as a test object for the two proposed strategies. Finally, a computer simulation experiment confirms the effectiveness of the adaptive four-state KF-based strategy under non-maneuvering conditions and the virtue of INS-assisted methods under maneuvering conditions.
Sidewall GaAs tunnel junctions fabricated using molecular layer epitaxy
Ohno, Takeo; Oyama, Yutaka
2012-01-01
In this article we review the fundamental properties and applications of sidewall GaAs tunnel junctions. Heavily impurity-doped GaAs epitaxial layers were prepared using molecular layer epitaxy (MLE), in which intermittent injections of precursors in ultrahigh vacuum were applied, and sidewall tunnel junctions were fabricated using a combination of device mesa wet etching of the GaAs MLE layer and low-temperature area-selective regrowth. The fabricated tunnel junctions on the GaAs sidewall with normal mesa orientation showed a record peak current density of 35 000 A cm⁻². They can potentially be used as terahertz devices such as a tunnel injection transit time effect diode or an ideal static induction transistor. PMID:27877466
Galaviz-Silva, Lucio; Pérez-Treviño, Karla Carmelita; Molina-Garza, Zinnia J
2013-12-01
This study aimed to document the geographic distribution of ixodid tick species on dogs and the prevalence of Borrelia burgdorferi s.l. in adult ticks and blood samples by amplification of the ospA region of the B. burgdorferi genome. The study area included nine localities in Nuevo León state. DNA amplification was performed on pools of ticks to calculate the maximum likelihood estimate (MLE) of the infection rate, and the community composition (prevalence, abundance, and intensity of infestation) was recorded. A total of 2,543 adult ticks, representing four species, Rhipicephalus sanguineus, Dermacentor variabilis, Rhipicephalus (Boophilus) annulatus, and Amblyomma cajennense, were recorded from 338 infested dogs. Statistically significant correlations were observed between female dogs and infestation (P = 0.0003) and between R. sanguineus and locality (P = 0.0001). Dogs sampled in Guadalupe and Estanzuela were positive by PCR (0.9 %) for B. burgdorferi. Rhipicephalus sanguineus had the highest abundance, intensity, and prevalence (10.57, 7.12, and 94.6, respectively). PCR results from 256 pools showed that four pools of D. variabilis were positive (1.6 %), with an MLE of 9.2 %; nevertheless, it is important to consider that other reservoir hosts for D. variabilis and B. burgdorferi are probably present in the study area and very likely play a much more important role in the ecology of Lyme borreliosis than dogs, which should be considered in future studies.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but share the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and one in which the parameters of a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.
2017-12-01
The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric observations of GHGs coupled to an atmospheric transport and dispersion model. The magnitude of the update depends upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices, and their assumed structure and magnitude can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with the prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.
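As a sketch of the "regularized sample covariance" ingredient, Ledoit-Wolf shrinkage is one standard regularizer; the abstract does not say which one the authors chose, and the ensemble size and grid dimension below are invented for illustration.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Hypothetical prior-error ensemble: 40 flux-error draws over 500 grid cells.
rng = np.random.default_rng(0)
samples = rng.normal(size=(40, 500))

# The raw sample covariance is rank-deficient (40 << 500); shrinkage blends
# it with a scaled identity so the result stays well-conditioned and can be
# inverted inside a Kalman filter / Bayesian inversion update.
lw = LedoitWolf().fit(samples)
B_prior = lw.covariance_                 # regularized prior-error covariance
print(lw.shrinkage_, np.linalg.cond(B_prior))
```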
Sheng, Jiangyun; Baldeck, Jeremiah D.; Nguyen, Phuong T.M.; Quivey, Robert G.; Marquis, Robert E.
2011-01-01
Alkali production by oral streptococci is considered important for dental plaque ecology and caries moderation. Recently, malolactic fermentation (MLF) was identified as a major system for alkali production by oral streptococci, including Streptococcus mutans. Our major objectives in the work described in this paper were to further define the physiology and genetics of MLF of oral streptococci and its roles in protection against metabolic stress damage. l-Malic acid was rapidly fermented to l-lactic acid and CO2 by induced cells of wild-type S. mutans, but not by deletion mutants for mleS (malolactic enzyme) or mleP (malate permease). Mutants for mleR (the contiguous regulator gene) had intermediate capacities for MLF. Loss of capacity to catalyze MLF resulted in loss of capacity for protection against lethal acidification. MLF was also found to be protective against oxidative and starvation damage. The capacity of S. mutans to produce alkali from malate was greater than its capacity to produce acid from glycolysis at low pH values of 4 or 5. MLF acted additively with the arginine deiminase system for alkali production by Streptococcus sanguinis, but not with urease of Streptococcus salivarius. Malolactic fermentation is clearly a major process for alkali generation by oral streptococci and for protection against environmental stresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudiyanselage, Kumudu; Yi, Cheol-Woo; Szanyi, Janos
2011-05-31
The adsorption and reaction of NO2 on BaO (<1, ~3, and >20 monolayer equivalent (MLE))/Pt(111) model systems were studied with temperature programmed desorption (TPD), X-ray photoelectron spectroscopy (XPS), and infrared reflection absorption spectroscopy (IRAS) under ultra-high vacuum (UHV) as well as elevated pressure conditions. NO2 reacts with sub-monolayer BaO (<1 MLE) to form nitrites only, whereas the reaction of NO2 with BaO (~3 MLE)/Pt(111) produces mainly nitrites and a small amount of nitrates under UHV conditions (P(NO2) ~ 1.0 × 10−9 Torr) at 300 K. In contrast, a thick BaO (>20 MLE) layer on Pt(111) reacts with NO2 to form nitrite-nitrate ion pairs under the same conditions. At elevated NO2 pressures (≥ 1.0 × 10−5 Torr), however, BaO layers at all three coverages convert to amorphous barium nitrates at 300 K. Upon annealing to 500 K, these amorphous barium nitrate layers transform into crystalline phases. The thermal decomposition of the thus-formed Ba(NOx)2 species is also influenced by the coverage of BaO on the Pt(111) substrate: at low BaO coverages, these species decompose at significantly lower temperatures in comparison with those formed on thick BaO films, due to the presence of the Ba(NOx)2/Pt interface where the decomposition can proceed at lower temperatures. However, the thermal decomposition of the thick Ba(NO3)2 films follows that of bulk nitrates. Results obtained from these BaO/Pt(111) model systems under UHV and elevated pressure conditions clearly demonstrate that both the BaO film thickness and the applied NO2 pressure are critical in the Ba(NOx)2 formation and subsequent thermal decomposition processes.
A comparison of PCA/ICA for data preprocessing in remote sensing imagery classification
NASA Astrophysics Data System (ADS)
He, Hui; Yu, Xianchuan
2005-10-01
In this paper a performance comparison of a variety of data preprocessing algorithms for remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three independent component analysis (ICA) methods: Fast-ICA (Aapo Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002) and EFFICA (Aiyou Chen & Peter Bickel, 2003). These algorithms were applied to remote sensing imagery (1600×1197) obtained from Shunyi, Beijing. For classification, a maximum likelihood classification (MLC) method was used on both the raw and the preprocessed data. The results show that classification with preprocessed data yields more reliable results than classification with raw data; among the preprocessing algorithms, the ICA methods improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show that EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computational cost and memory demand are greatly reduced, which resolves the "out of memory" problem encountered with the other algorithms.
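Kernel-ICA and EFFICA are not packaged in common libraries, but the preprocess-then-classify pipeline itself is easy to sketch: a Gaussian maximum likelihood classifier (MLC) with per-class means and covariances is exactly quadratic discriminant analysis. The array shapes and class counts below are placeholders, not the Shunyi scene.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Placeholder data: 5000 pixels x 7 spectral bands with 4 ground-truth classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 7))
y = rng.integers(0, 4, size=5000)

for name, transform in [("raw", None),
                        ("PCA", PCA(n_components=5)),
                        ("FastICA", FastICA(n_components=5, random_state=1))]:
    Z = X if transform is None else transform.fit_transform(X)
    # Gaussian MLC: fit a mean vector and covariance matrix per class.
    clf = QuadraticDiscriminantAnalysis().fit(Z, y)
    print(name, clf.score(Z, y))   # training accuracy, for illustration only
```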
NASA Astrophysics Data System (ADS)
Hambach, U.; Hark, M.; Zeeden, C.; Reddersen, B.; Zöller, L.; Fuchs, M.
2009-04-01
One of the youngest and worldwide documented geomagnetic excursions in the Brunhes Chron is the Mono Lake excursion (MLE). It has been detected in marine and terrestrial sedimentary archives as well as in lavas. Recent age determinations and age estimates for the MLE centre around an age interval of approximately 31-34 ka. Like the Laschamp excursion, the MLE coincides with a distinct peak in cosmogenic radionuclides in ice cores and sedimentary archives. It therefore provides an additional geomagnetic time marker with which to synchronise different climate archives. Here we report on a detailed record of the MLE from a loess site at Krems, Lower Austria. The site is situated on the southern slope of the Wachtberg hill in the vicinity of the old city centre of Krems. The archive comprises Middle to Upper Würmian (Late Pleistocene) loess in which an Upper Palaeolithic (Early Gravettian) cultural layer is embedded. The most spectacular finds are a double infant burial found in 2005 and a single burial discovered in 2006 (Einwögerer et al., 2006). Generally, the archaeological findings show extraordinarily good preservation due to embedding in rapidly sedimented loess (Händel et al., 2008). The roughly 10 m thick loess pile consists of calcareous sandy, coarse silt which is rich in mica, indicating local sources. It is well stratified, with brownish horizons representing embryonic soils that point to incipient pedogenesis. Some of the pedo-horizons occasionally show indications of minor erosion and bedding-parallel sediment transport, but no linear erosional features. Pale greyish horizons are the result of partial gleying under permafrost conditions. No strong pedogenesis including decalcification and clay formation is present. The cultural layer is still covered by more than 5 m of loess and is dated by radiocarbon to ~27 ka 14C BP (Einwögerer et al., 2006). Below this layer, up to 2.5 m of loess resting on Lower Pleistocene fluvial gravels are preserved. Thus, the loess section represents a palaeoclimatic record of alternating cold-dry and warm-humid conditions on a millennial scale. Optically stimulated luminescence dating of aeolian loess around the cultural layer reveals ages of 30 to 32 ka, which is supported by thermoluminescence dating of burnt loess from a hearth belonging to the archaeological living floor. In summer 2005 and 2006, two overlapping sections were continuously sampled for palaeomagnetic investigations. The sampled sections are located outside the centre of the main archaeological occupation, in the northwestern corner of the excavation pit. Sample spacing is strictly 2.1 cm, measured from centre to centre of the specimens. In total, 432 individually oriented specimens were recovered from the almost 8 m thick section. Magnetic susceptibility (MS) as a function of depth generally follows the lithology. Low MS values represent pure unaltered or weakly gleyed loess, whereas higher values represent the enhancement of magnetic minerals caused by incipient soil formation. Anhysteretic remanent magnetisation (ARM) versus MS reveals an enhancement of super-paramagnetic particles where MS is increased. Consequently, the rock magnetic variations with depth can be taken as a palaeoclimatic record representing the climatic variations between drier and slightly more humid conditions at the transition from the Middle to the Upper Pleniglacial.
Based on the ARM/MS record, a correlation of the geoarchive at the Krems-Wachtberg site with the NGRIP isotopic record (NGRIP Members, 2004) and with sedimentological data from maar-lake sediments of the Eifel area, Germany (ELSA; Schaber and Sirocko, 2005) can be established. The general correlation suggests dating of the loess at the excavation site to a time interval between approximately 20 and 40 ka, covering Greenland interstadials (GI) 2 to 8 and Heinrich events 3 and 4 (top). The Gravettian living floor is assigned to the base of GI 5 and thus to an age of 32 to 33 ka. The directional palaeomagnetic record is of high quality and shows variations within the bandwidth of secular variation in the upper and lower parts of the section, whereas in the central part shallow (≤ 30°) and oversteepened inclinations reveal the record of a geomagnetic excursion just above the find horizon. The shallow inclinations are preceded by and go along with westerly declinations, whereas the steep inclinations are preceded by easterly declinations. This directional pattern is similar to what was found at Mono Lake in California (e.g. Liddicoat and Coe, 1979; Lund et al., 1988). A relative palaeointensity (RPI) record was constructed by using MS and ARM as normalisers. This record corresponds quite well to GLOPIS (Laj et al., 2004) and thus provides additional dating. The peak of the directional excursion coincides with a relative minimum of RPI. The average RPI during the excursional interval, however, is significantly higher than during normal periods, contrary to what is usually reported. Furthermore, just before and after the directional excursion the highest values of RPI occur. The largest amplitude of the directional excursion does not correspond to the well-defined minimum in RPI preceding this interval, which is usually taken as the MLE in marine RPI records. This offset between the RPI and the directional record may indicate the presence of strong non-dipole components and may also explain the blur in dating of the MLE. The calculated VGPs of the directional excursion lie over North America but do not correspond to the looping behaviour reported from the Mono Lake VGPs themselves (Liddicoat and Coe, 1979). The cultural layer at the Krems-Wachtberg site is located in the centre of the RPI minimum, which is slightly older than the peak of the directional excursion. The radiocarbon ages from the cultural layer (~27 ka 14C age BP = ~32 ka calendric age calBP) fit well to the age estimates of the MLE at Mono Lake based on radiocarbon dating and tephrochronology (31.5-33.3 ka; Benson et al., 2003). Furthermore, the recently published 40Ar/39Ar ages of one excursional group (Auckland cluster 1: 31.6 ± 1.8 ka) from the Auckland volcanic field, New Zealand (Cassata et al., 2008) correspond to the ages discussed above. Thus, the MLE is a globally occurring time marker, but one probably dominated by strong non-dipole components. References: Benson et al. (2003). Quaternary Science Reviews, 22, 135-140; Cassata et al. (2008). Earth and Planetary Science Letters, 268, 76-88; Einwögerer et al. (2006). Nature, 444, 285; Händel et al. (in press). Quaternary International; Laj et al. (2004). Geophysical Monograph Series, 145, 255-265; Liddicoat and Coe (1979). Journal of Geophysical Research, 84, 261-271; Lund et al. (1988). Geophysical Research Letters, 15(10), 1101-1104; North Greenland Ice Core Project Members (2004). Nature, 431, 147-151; Schaber and Sirocko (2005). Mainzer geowiss. Mitt., 33, 295-340.
1988-05-01
… in turn, is controlled by the units above it. Dynamic programming is a mathematical technique well suited for optimization of multistage models. … interval to a desired accuracy. Several region elimination methods have been discussed in the literature, including the Golden Section and Fibonacci methods.
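The region-elimination idea named in the surviving fragment can be made concrete with a textbook golden-section search; this is an illustration, not code recovered from the degraded report.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Shrink [a, b] around the minimum of a unimodal f by region elimination."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618, golden ratio conjugate
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum cannot lie in (d, b]: discard it
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum cannot lie in [a, c): discard it
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

print(golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```

Each iteration multiplies the interval width by about 0.618, so accuracy improves geometrically with the number of function evaluations; the Fibonacci method is the finite-horizon optimal variant of the same idea.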
Gooda Sahib Jambocus, Najla; Saari, Nazamid; Ismail, Amin; Mahomoodally, Mohamad Fawzi; Abdul Hamid, Azizah
2016-01-01
The prevalence of obesity is increasing worldwide, with high fat diet (HFD) as one of the main contributing factors. Obesity increases the predisposition to other diseases such as diabetes through various metabolic pathways. Limited availability of antiobesity drugs and the popularity of complementary medicine have encouraged research in finding phytochemical strategies against this multifaceted disease. HFD-induced obese Sprague-Dawley rats were treated with an extract of Morinda citrifolia L. leaves (MLE 60). After 9 weeks of treatment, positive effects were observed on adiposity, fecal fat content, plasma lipids, and insulin and leptin levels. The inducement of obesity and the treatment with MLE 60 on metabolic alterations were then further elucidated using a 1H NMR based metabolomics approach. Discriminating metabolites involved were products of various metabolic pathways, including glucose metabolism and the TCA cycle (lactate, 2-oxoglutarate, citrate, succinate, pyruvate, and acetate), amino acid metabolism (alanine, 2-hydroxybutyrate), choline metabolism (betaine), creatinine metabolism (creatinine), and gut microbiome metabolism (hippurate, phenylacetylglycine, dimethylamine, and trigonelline). Treatment with MLE 60 resulted in significant improvement in the metabolic perturbations caused by obesity, as demonstrated by the proximity of the treated group to the normal group in the OPLS-DA score plot and by the change in trajectory of the diseased group towards the healthy group upon treatment.
Mishima, Katsuaki; Moritani, Norifumi; Nakano, Hiroyuki; Matsushita, Asuka; Iida, Seiji; Ueyama, Yoshiya
2013-12-01
The purpose of this study was to explore the voice characteristics of patients with mandibular prognathism, and to investigate the effects of mandibular setback surgery on these characteristics using nonlinear dynamics and conventional acoustic analyses. Sixteen patients (8 males and 8 females) who had skeletal Class III malocclusion without cleft palate, and who underwent a bilateral sagittal split ramus osteotomy (BSSRO), were enrolled. As controls, 50 healthy adults (25 males and 25 females) were enrolled. The mean first Lyapunov exponents (mLE1), computed for each one-second interval, and the fundamental frequency (F0) and the frequencies of the first and second formants (F1, F2) were calculated for each Japanese vowel. The mLE1s for /u/ in males and /o/ in females, and the F2s for /i/ and /u/ in males, changed significantly after BSSRO. Class III voice characteristics were observed in the mLE1s for /i/ in both males and females; in the F0 for /a/, /i/, /u/ and /o/ in females; in the F1 and F2 for /a/ in males; and in the F1 for /u/ and the F2 for /i/ in females. Most of these characteristics were preserved after BSSRO. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys; compared to other existing models, this approach is superior in the presence of high capture heterogeneity with small sample sizes and comparable in the presence of low capture heterogeneity with a large sample size. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1,367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source.
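The capture-recapture flavour of the batch problem can be illustrated with the simplest homogeneous-capture model (model M0), profiling the likelihood over the unknown abundance N. The paper's own models additionally handle capture heterogeneity, so the sketch below, including its function names, example numbers and grid search, is an assumption-laden simplification.

```python
import numpy as np
from scipy.special import gammaln

def profile_loglik(N, D, n, S):
    """Model-M0 log-likelihood for abundance N, profiled over capture prob.
    D = distinct animals genotyped, n = total detections, S = surveys."""
    p = n / (N * S)                      # MLE of the capture probability given N
    return (gammaln(N + 1) - gammaln(N - D + 1)
            + n * np.log(p) + (N * S - n) * np.log1p(-p))

def abundance_mle(D, n, S, N_max=5000):
    Ns = np.arange(max(D, n // S + 1), N_max)   # keep p strictly below 1
    ll = [profile_loglik(N, D, n, S) for N in Ns]
    return int(Ns[np.argmax(ll)])

# e.g. 3 surveys of a batch: 420 distinct genotypes, 600 detections in total.
print(abundance_mle(D=420, n=600, S=3))
```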
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation, but they have certain shortcomings, so it is worth searching for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
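For the MLE step, the exponentiated (generalized) exponential distribution has density f(t) = (α/σ)(1 − e^(−(t−μ)/σ))^(α−1) e^(−(t−μ)/σ) for t > μ, so the log-likelihood can be maximized numerically. The sketch below uses invented inter-event times, not the Himalayan catalogue.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t):
    """Negative log-likelihood of the three-parameter exponentiated
    exponential: F(t) = (1 - exp(-(t - mu)/sigma))**alpha, t > mu."""
    mu, sigma, alpha = theta
    if sigma <= 0 or alpha <= 0 or np.any(t <= mu):
        return np.inf                      # outside the parameter space
    z = (t - mu) / sigma
    return -np.sum(np.log(alpha / sigma)
                   + (alpha - 1.0) * np.log1p(-np.exp(-z)) - z)

# Invented recurrence intervals (years) standing in for the real catalogue:
t = np.array([4.0, 7.0, 2.0, 11.0, 6.0, 9.0, 3.0, 14.0, 5.0, 8.0])
res = minimize(neg_loglik, x0=[0.5, 5.0, 1.5], args=(t,), method="Nelder-Mead")
mu_hat, sigma_hat, alpha_hat = res.x
print(mu_hat, sigma_hat, alpha_hat)
```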
Multi- and monofractal indices of short-term heart rate variability.
Fischer, R; Akay, M; Castiglioni, P; Di Rienzo, M
2003-09-01
Indices of heart rate variability (HRV) based on fractal signal models have recently been shown to possess value as predictors of mortality in specific patient populations. To develop more powerful clinical indices of HRV based on a fractal signal model, the study investigated two HRV indices based on a monofractal signal model called fractional Brownian motion and an index based on a multifractal signal model called multifractional Brownian motion. The performance of the indices was compared with an HRV index in common clinical use. To compare the indices, 18 normal subjects underwent postural changes, and the indices were compared on their ability to respond to the resulting autonomic events in HRV recordings. The magnitude of the response to postural change (normalised by the measurement variability) was assessed by analysis of variance and multiple comparison testing. Four HRV indices were investigated in this study: the standard deviation of all normal R-R intervals, an HRV index commonly used in the clinic; detrended fluctuation analysis, an HRV index found to be the most powerful predictor of mortality in a study of patients with depressed left ventricular function; an HRV index developed using the maximum likelihood estimation (MLE) technique for a monofractal signal model; and an HRV index developed for the analysis of multifractional Brownian motion signals. The HRV index based on the MLE technique was found to respond most strongly to the induced postural changes (95% CI). The magnitude of its response (normalised by the measurement variability) was at least 25% greater than that of any of the other indices tested.
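Of the four indices, detrended fluctuation analysis (DFA) is the easiest to reproduce; a minimal numpy version is sketched below. The window scales and non-overlapping segmentation are my choices, and the MLE-based fractional-Brownian-motion index would need the full likelihood machinery not shown here.

```python
import numpy as np

def dfa_alpha(rr, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis of an R-R interval series.
    Returns alpha, the slope of log F(n) versus log n."""
    y = np.cumsum(rr - np.mean(rr))             # integrated profile
    fluctuations = []
    for n in scales:
        m = len(y) // n                         # non-overlapping windows
        segments = y[:m * n].reshape(m, n)
        x = np.arange(n)
        sq = []
        for seg in segments:                    # detrend each window linearly
            coef = np.polyfit(x, seg, 1)
            sq.append(np.mean((seg - np.polyval(coef, x)) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

# e.g. white-noise R-R increments give alpha near 0.5:
rng = np.random.default_rng(4)
print(dfa_alpha(rng.normal(0.8, 0.05, size=2000)))
```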
Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam
NASA Astrophysics Data System (ADS)
N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.
In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, which can harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. One year of hourly average data for each of 2006 and 2007 was used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated, and the return period for the coming year was predicted from the cumulative density function (cdf) of the best-fit distribution. From the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area was predicted not to exceed the MAAQG of 150 μg/m3.
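The fit-then-extrapolate workflow can be sketched with scipy's MLE fitter for the log-normal distribution; the synthetic hourly series below merely stands in for the Shah Alam record, and the exceedance arithmetic follows the abstract's return-period logic.

```python
import numpy as np
from scipy import stats

# Synthetic hourly PM10 record (ug/m3) standing in for the Shah Alam data.
rng = np.random.default_rng(2)
pm10 = rng.lognormal(mean=3.8, sigma=0.5, size=8760)

shape, loc, scale = stats.lognorm.fit(pm10)            # MLE parameter estimates
p_exceed = stats.lognorm.sf(150.0, shape, loc, scale)  # P(PM10 > 150) per hour

hours_per_year = 8760 * p_exceed                # expected exceedance hours/year
return_period_days = 1.0 / (24.0 * p_exceed)   # mean days between exceedances
print(p_exceed, hours_per_year, return_period_days)
```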
Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János
2016-04-01
Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
Hunger influenced life expectancy in war-torn Sub-Saharan African countries.
Uchendu, Florence N
2018-04-27
Malnutrition is a global public health problem, especially in developing countries experiencing war or conflict. War might be one of the socio-political factors influencing malnutrition in Sub-Saharan African (SSA) countries. This study aims to determine the influence of war on corruption, population (POP), number of population malnourished (NPU), food security and life expectancy (LE) in war-torn SSA countries (WTSSA) by comparing their malnutrition indicators. Fourteen countries in WTSSA were stratified into zones according to war incidences. Countries' secondary data on POP, NPU, Food Security Index (FSI), Corruption Perceptions Index (CPI), Global Hunger Index (GHI) and LE were obtained from global published data. t tests, multivariate and Pearson correlation analyses were performed to determine the relationship between CPI, POP, GHI, FSI, NPU, male LE (MLE) and female LE (FLE) in WTSSA at p < .05. Data were presented in tables as means, standard deviations and percentages. Mean NPU, CPI, GHI, POP, FSI, MLE and FLE in WTSSA were 5.0 million, 28.3%, 18.2%, 33.8 million, 30.8%, 54.7 years and 57.1 years, respectively. GHI significantly influenced LE in both male and female populations in WTSSA. NPU, CPI, FSI, GHI and FLE did not differ significantly across zones, except for MLE. Malnutrition indicators were similarly affected in WTSSA. Hunger influenced life expectancy. Policies promoting good governance, equity, peaceful co-existence, respect for human rights and adequate food supply will aid malnutrition eradication and prevent war occurrences in Sub-Saharan African countries.
People, Parks and Rainforests.
ERIC Educational Resources Information Center
Singer, Judith Y.
1992-01-01
The MLE Learning Center, a publicly funded day care center and after-school program in Brooklyn, New York, helps children develop awareness of a global community by using local resources to teach the children about the rainforest. (LB)
The Fastrack Suborbital Platform for Microgravity Applications
NASA Technical Reports Server (NTRS)
Levine, H. G.; Ball, J. E.; Shultz, D.; Odyssey, A.; Wells, H. W.; Soler, R. R.; Albino, S.; Meshberger, R. J.; Murdoch, T.
2009-01-01
The FASTRACK suborbital experiment platform has been developed to provide a capability for utilizing 2.5-5 minute microgravity flight opportunities anticipated from the commercial suborbital fleet (currently in development) for science investigations, technology development and hardware testing. It also provides "express rack" functionality to deliver payloads to ISS. FASTRACK fits within a 24" x 24" x 36" (61 cm x 61 cm x 91.4 cm) envelope and is capable of supporting either two single Middeck Locker Equivalents (MLE) or one double MLE configuration. Its overall mass is 300 lbs (136 kg), of which 160 lbs (72 kg) is reserved for experiments. FASTRACK operates using 28 VDC power or batteries. A support drawer located at the bottom of the structure contains all ancillary electrical equipment (including batteries, a conditioned power system and a data collection system) as well as a front panel that contains all switches (including remote cut-off), breakers and warning LEDs.
Genotoxic Evaluation of Mikania laevigata Extract on DNA Damage Caused by Acute Coal Dust Exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, T.P.; Heuser, V.D.; Tavares, P.
2009-06-15
We report data on the possible antigenotoxic activity of Mikania laevigata extract (MLE) after acute intratracheal instillation of coal dust using the comet assay in peripheral blood, bone marrow, and liver cells and the micronucleus test in peripheral blood of Wistar rats. The animals were pretreated for 2 weeks with saline solution (groups 1 and 2) or MLE (100 mg/kg) (groups 3 and 4). On day 15, the animals were anesthetized with ketamine (80 mg/kg) and xylazine (20 mg/kg), and gross mineral coal dust (3 mg/0.3 mL saline) (groups 2 and 4) or saline solution (0.3 mL) (groups 1 and 3) was administered directly in the lung by intratracheal administration. Fifteen days after coal dust or saline instillation, the animals were sacrificed, and the femur, liver, and peripheral blood were removed. The results showed a general increase in the DNA damage values at 8 hours for all treatment groups, probably related to surgical procedures that had stressed the animals. Also, liver cells from rats treated with coal dust, pretreated or not with MLE, showed statistically higher comet assay values compared to the control group at 14 days after exposure. These results could be expected because the liver metabolizes a variety of organic compounds to more polar by-products. On the other hand, the micronucleus assay results did not show significant differences among groups. Therefore, our data do not support an antimutagenic activity of M. laevigata as a modulator of DNA damage after acute coal dust instillation.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel based methods such as the k-nearest neighbor (k-NN) and matched block bootstrap (MABB) methods, and nonparametric disaggregation), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend parametric and nonparametric models advantageously to model streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated on the accuracy with which they predict the water-use characteristics, which requires a large number of trial simulations and the inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric PAR(1) model and the matched block bootstrap (MABB)) based on explicit objective functions that minimize the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space, involving simultaneous exploration of the parametric (PAR(1)) and nonparametric (MABB) components. This is achieved using an efficient evolutionary search based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manually selecting the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which both the parametric and nonparametric components are explored simultaneously, yields a much better prediction of storage capacity than the MLE-based hybrid models, in which model selection is done in two stages and thus probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
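The nonparametric MABB component can be approximated in a few lines; the sketch below implements a plain (unmatched) block bootstrap over a years-by-seasons flow matrix. The matched variant additionally conditions each sampled block on the value at the end of the previous one, which is omitted here, and all data in the usage example are synthetic.

```python
import numpy as np

def block_bootstrap(flows, block_len, n_years, rng=None):
    """Resample a (years x seasons) flow matrix in contiguous blocks.
    Plain block bootstrap: blocks are drawn freely, so within-block
    season-to-season dependence is preserved but block joins are not
    matched (as they would be in MABB)."""
    rng = rng or np.random.default_rng()
    flows = np.asarray(flows, dtype=float)
    seasons = flows.shape[1]
    series = flows.ravel()                       # season-ordered record
    n = n_years * seasons
    out = np.empty(n)
    i = 0
    while i < n:
        start = rng.integers(0, len(series) - block_len + 1)
        take = min(block_len, n - i)
        out[i:i + take] = series[start:start + take]
        i += take
    return out.reshape(n_years, seasons)

# e.g. synthesize 100 years from a 40-year, 12-season record:
rng = np.random.default_rng(5)
hist = rng.gamma(shape=2.0, scale=50.0, size=(40, 12))
synth = block_bootstrap(hist, block_len=24, n_years=100, rng=rng)
```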
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is 1, the negative log-likelihood function is convex, so the optimal solution can be found by convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient and the product of a vector with the Hessian matrix of the negative log-likelihood function. We show that, with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
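For the T = 1 case the pixel-level MLE even has a closed form, consistent with the convexity claim: if each of K binary subpixels fires when it detects at least one photon, and subpixel counts are Poisson with mean λ/K, then observing m fired subpixels gives λ̂ = −K ln(1 − m/K). The sketch below checks this on simulated data; the paper's general-threshold, filter-bank algorithm is not reproduced here.

```python
import numpy as np

def mle_intensity(binary, K):
    """Closed-form MLE of the light intensity at one conventional pixel,
    threshold T = 1: P(subpixel fires) = 1 - exp(-lam / K)."""
    m = binary.sum()
    if m == K:                 # all subpixels fired: the MLE diverges, clip
        m = K - 0.5
    return -K * np.log1p(-m / K)

# Simulate one pixel: K binary subpixels, true intensity 2000 photons.
rng = np.random.default_rng(3)
K, lam = 10_000, 2_000.0
fired = rng.poisson(lam / K, size=K) >= 1
print(mle_intensity(fired, K))   # close to 2000
```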
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Dai, Jinxing; Xia, Xinyu; Li, Zhisheng; Coleman, Dennis D.; Dias, Robert F.; Gao, Ling; Li, Jian; Deev, Andrei; Li, Jin; Dessort, Daniel; Duclerc, Dominique; Li, Liwu; Liu, Jinzhong; Schloemer, Stefan; Zhang, Wenlong; Ni, Yunyan; Hu, Guoyi; Wang, Xiaobo; Tang, Yongchun
2012-01-01
Compound-specific carbon and hydrogen isotopic compositions of three natural gas round robins were calibrated by ten laboratories carrying out more than 800 measurements, including both on-line and off-line methods. Two-point calibrations were performed with international measurement standards for hydrogen isotope ratios (VSMOW and SLAP) and carbon isotope ratios (NBS 19 and L-SVEC CO2). The consensus δ13C values and uncertainties were derived from maximum likelihood estimation (MLE) based on off-line measurements; the consensus δ2H values and uncertainties were derived from MLE of both off-line and on-line measurements, taking the bias of the on-line measurements into account. The calibrated consensus values, in ‰ relative to VSMOW and VPDB, are:
NG1 (coal-related gas): methane δ2H_VSMOW = −185.1 ± 1.2‰, δ13C_VPDB = −34.18 ± 0.10‰; ethane δ2H_VSMOW = −156.3 ± 1.8‰, δ13C_VPDB = −24.66 ± 0.11‰; propane δ2H_VSMOW = −143.6 ± 3.3‰, δ13C_VPDB = −22.21 ± 0.11‰; i-butane δ13C_VPDB = −21.62 ± 0.12‰; n-butane δ13C_VPDB = −21.74 ± 0.13‰; CO2 δ13C_VPDB = −5.00 ± 0.12‰.
NG2 (biogas): methane δ2H_VSMOW = −237.0 ± 1.2‰, δ13C_VPDB = −68.89 ± 0.12‰.
NG3 (oil-related gas): methane δ2H_VSMOW = −167.6 ± 1.0‰, δ13C_VPDB = −43.61 ± 0.09‰; ethane δ2H_VSMOW = −164.1 ± 2.4‰, δ13C_VPDB = −40.24 ± 0.10‰; propane δ2H_VSMOW = −138.4 ± 3.0‰, δ13C_VPDB = −33.79 ± 0.09‰.
All of the assigned values are traceable to the international carbon isotope standard VPDB and the hydrogen isotope standard VSMOW.
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) considers a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weight only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model; the CMP model and the RPM model are special cases of this new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
Stults-Kolehmainen, Matthew A.; Tuit, Keri; Sinha, Rajita
2015-01-01
Both cumulative adversity, an individual's lifetime exposure to stressors, and insufficient exercise are associated with poor health outcomes. The purpose of this study was to ascertain whether exercise buffers the association of cumulative adverse life events (CALE) with health in a community-wide sample of healthy adults (ages 18-50 years; women: n = 219, 29.5 ± 9.2 years; men: n = 176, 29.4 ± 8.7 years, mean ± standard deviation). Participants underwent the Cumulative Adversity Interview, which divides life events into three subsets: major life events (MLE), recent life events (RLE) and traumatic experiences (TLE). These individuals also completed the Cornell Medical Index and a short assessment of moderate or greater intensity exercise behavior, modified from the Nurses' Health Study. Results indicated that higher CALE was associated with greater total health problems (r = 0.431, p < 0.001). Interactions between stress and exercise were not apparent for RLE and TLE. However, at low levels of MLE, greater exercise was related to fewer total, physical, cardiovascular and psychological health problems (p < 0.05). Conversely, at high levels of MLE, the benefits of exercise appear to be absent. Three-way interactions were observed between sex, exercise and stress. Increased levels of exercise were related to better physical health in men at all levels of CALE. Only women who reported both low levels of CALE and high levels of exercise had more favorable physical health outcomes. A similar pattern of results emerged for RLE. Together, these data suggest that increased exercise is related to better health, but these effects may vary by cumulative stress exposure and sex.
Dynamic Modelling with "MLE-Energy Dynamic" for Primary School
NASA Astrophysics Data System (ADS)
Giliberti, Enrico; Corni, Federico
During the recent years simulation and modelling are growing instances in science education. In primary school, however, the main use of software is the simulation, due to the lack of modelling software tools specially designed to fit/accomplish the needs of primary education. In particular primary school teachers need to use simulation in a framework that is both consistent and simple enough to be understandable by children [
NASA Astrophysics Data System (ADS)
Hillman, Dustin S.
The primary goal of this study is to evaluate the effects of different media-based learning environments (MLEs) that present identical chemistry content material. This is done with four different MLEs that utilize some or all components of a chemistry-based prototype video game. Examination of general chemistry student volunteers purposefully randomized to one of the four MLEs did not provide evidence that a higher level of interactivity resulted in a more effective MLE for the chemistry content. The data suggested that the cognitive load of playing the chemistry-based video game may have impaired recall of the chemistry content being presented, while students watching a movie of the chemistry-based video game were able to recall the chemistry content more efficiently. Further studies in this area need to address the overall cognitive load of the different MLEs to better determine the most effective MLE for this chemistry content.
Lucero, D E; Carlson, T C; Delisle, J; Poindexter, S; Jones, T F; Moncayo, A C
2016-05-01
West Nile virus (WNV) and Flanders virus (FLAV) can cocirculate in Culex mosquitoes in parts of North America. A large dataset of mosquito pools tested for WNV and FLAV was queried to understand the spatiotemporal relationship between these two viruses in Shelby County, TN. We found strong evidence of global clustering (i.e., spatial autocorrelation) and overlapping local clustering (i.e., hot spots based on the Getis-Ord Gi* statistic) of maximum likelihood estimates (MLE) of infection rates (IR) during 2008-2013. Temporally, FLAV emerges and peaks on average 10.2 wk prior to WNV based on IR. Higher levels of WNV IR were detected within 3,000 m of FLAV-positive pool buffers than outside these buffers. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.
A Cognitive Developmental Approach to Social Problem Management.
ERIC Educational Resources Information Center
Watts, Walter J.
The paper reviews L. Kohlberg's theory of moral reasoning and its relationship to cognitive development of children. R. Feuerstein's theories of mediated learning experience (MLE) are reviewed, and remediation for individuals deficient in cognitive functions is addressed. The paper notes the existence of deficient cognitive functions, specifically…
Scaling with System Size of the Lyapunov Exponents for the Hamiltonian Mean Field Model
NASA Astrophysics Data System (ADS)
Manos, Thanos; Ruffo, Stefano
2011-12-01
The Hamiltonian Mean Field model is a prototype for systems with long-range interactions. It describes the motion of N particles moving on a ring, coupled through an infinite-range potential. The model has a second-order phase transition at the energy density U_c = 3/4, and its dynamics is exactly described by the Vlasov equation in the N → ∞ limit. Its chaotic properties have been investigated in the past, but the determination of the scaling with N of the Lyapunov Spectrum (LS) of the model remains a challenging open problem. Here we show that the N^(-1/3) scaling of the Maximal Lyapunov Exponent (MLE), found in previous numerical and analytical studies, extends to the full LS; the scaling is "precocious" for the LS, meaning that it becomes manifest for a much smaller number of particles than is needed to check the scaling for the MLE. Besides that, the N^(-1/3) scaling appears to be valid not only for U > U_c, as suggested by theoretical approaches based on a random matrix approximation, but also below a threshold energy U_t ≈ 0.2. Using a recently proposed method (GALI) devised to rapidly check the chaotic or regular nature of an orbit, we find that U_t is also the energy at which a sharp transition from weak to strong chaos is present in the phase space of the model. Around this energy the phase of the vector order parameter of the model becomes strongly time dependent, inducing a significant untrapping of particles from a nonlinear resonance.
New Horizons: Designing and Measuring for Modern Learning Environments
ERIC Educational Resources Information Center
Carter, Richard Allen, Jr.
2017-01-01
This dissertation consists of five chapters. The first chapter serves to introduce the Modern Learning Environment (MLE) by discussing the challenges of designing and measuring student performance in these novel environments. Chapter two of the dissertation reviews the current research base of studying self-regulated learning in the modern…
Structural Identification and Comparison of Intelligent Mobile Learning Environment
ERIC Educational Resources Information Center
Upadhyay, Nitin; Agarwal, Vishnu Prakash
2007-01-01
This paper proposes a methodology using graph theory, matrix algebra and permanent function to compare different architecture (structure) design of intelligent mobile learning environment. The current work deals with the development/selection of optimum architecture (structural) model of iMLE. This can be done using the criterion as discussed in…
Probabilistic modelling of drought events in China via 2-dimensional joint copula
NASA Astrophysics Data System (ADS)
Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning
2018-04-01
Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, several popularly applied and relatively new bivariate Archimedean copulas were employed to derive regional and spatial copula models for appraising drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. Maximum likelihood estimation (MLE) and the curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the most appropriate bivariate copula in sub-regions I-VII and in mainland China as a whole. The goodness-of-fit tests under the CFM showed that copula NN19 is best for modelling the drought variables in sub-regions III, IV, V, and VI and in mainland China, NN20 in sub-region I, and NN13 in sub-region VII. Bivariate drought probability across mainland China is relatively high, with the highest probabilities found mainly in Northwestern and Southwestern China. The results also showed that different sub-regions may face different drought risks: the risks observed in sub-regions III, VI, and VII are significantly greater than in the other sub-regions. A higher probability of long-duration droughts in these sub-regions also corresponds to shorter return periods with greater drought severity. These results imply considerable challenges for water resources management in the different sub-regions, particularly in Northwestern and Southwestern China.
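As an illustration of the parameter-estimation step described above, the sketch below fits a bivariate Archimedean copula to drought duration/severity pairs by MLE. The study's best-fitting families (NN13, NN19, NN20 in Nelsen's numbering) are not reproduced here; the Clayton copula stands in as a simple Archimedean example, and the function names and pseudo-observation construction are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: MLE fit of a Clayton (Archimedean) copula to drought
# duration/severity pairs. Clayton is a stand-in for the NN-family
# copulas used in the study; all names here are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_neg_loglik(theta, u, v):
    """Negative log-likelihood of the Clayton copula density."""
    if theta <= 0:
        return np.inf
    t = u**(-theta) + v**(-theta) - 1.0
    ll = (np.log1p(theta)
          - (1.0 + theta) * (np.log(u) + np.log(v))
          - (2.0 + 1.0 / theta) * np.log(t))
    return -np.sum(ll)

def fit_clayton(duration, severity):
    """MLE of the Clayton dependence parameter from paired drought data."""
    n = len(duration)
    # pseudo-observations: empirical marginals via ranks, scaled to (0, 1)
    u = rankdata(duration) / (n + 1.0)
    v = rankdata(severity) / (n + 1.0)
    res = minimize_scalar(clayton_neg_loglik, bounds=(1e-6, 50.0),
                          args=(u, v), method="bounded")
    return res.x
```

In a full analysis the marginals would be the fitted parametric distributions rather than ranks, and the candidate copulas would then be compared with goodness-of-fit statistics, as the abstract describes.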
Sainudiin, Raazesh; Welch, David
2016-12-07
We derive a combinatorial stochastic process for the evolution of the transmission tree over the infected vertices of a host contact network in a susceptible-infected (SI) model of an epidemic. Models of transmission trees are crucial to understanding the evolution of pathogen populations. We provide an explicit description of the transmission process on the product state space of (rooted planar ranked labelled) binary transmission trees and labelled host contact networks with SI-tags as a discrete-state continuous-time Markov chain. We give the exact probability of any transmission tree when the host contact network is a complete, star or path network - three illustrative examples. We then develop a biparametric Beta-splitting model that directly generates transmission trees with exact probabilities as a function of the model parameters, but without explicitly modelling the underlying contact network, and show that for specific values of the parameters we can recover the exact probabilities for our three example networks through the Markov chain construction that explicitly models the underlying contact network. We use the maximum likelihood estimator (MLE) to consistently infer the two parameters driving the transmission process based on observations of the transmission trees and use the exact MLE to characterize equivalence classes over the space of contact networks with a single initial infection. An exploratory simulation study of the MLEs from transmission trees sampled from three other deterministic and four random families of classical contact networks is conducted to shed light on the relation between the MLEs of these families, with some implications for statistical inference, along with pointers to further extensions of our models. The insights developed here are also applicable to the simplest models of "meme" evolution in online social media networks through transmission events that can be distilled from observable actions such as "likes", "mentions", "retweets" and "+1s", along with any concomitant comments. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Dupuis, Alan P; Peters, Ryan J; Prusinski, Melissa A; Falco, Richard C; Ostfeld, Richard S; Kramer, Laura D
2013-07-15
Deer tick virus, DTV, is a genetically and ecologically distinct lineage of Powassan virus (POWV), also known as lineage II POWV. Human incidence of POW encephalitis has increased in the last 15 years, potentially due to the emergence of DTV, particularly in the Hudson Valley of New York State. We initiated an extensive sampling campaign to determine whether POWV was extant throughout the Hudson Valley in tick vectors and/or vertebrate hosts. More than 13,000 ticks were collected from hosts or vegetation and tested for the presence of DTV using molecular and virus isolation techniques. Vertebrate hosts of Ixodes scapularis (black-legged tick) were trapped (mammals) or netted (birds) and blood samples analyzed for the presence of neutralizing antibodies to POWV. Maximum likelihood estimates (MLE) were calculated to determine infection rates in ticks at each study site. Evidence of DTV was identified each year from 2007 to 2012 in nymphal and adult I. scapularis collected from the Hudson Valley. Fifty-eight tick pools were positive for virus and/or RNA. Infection rates were higher in adult ticks collected from areas east of the Hudson River. MLE limits ranged from 0.2 to 6.0 infected adults per 100 ticks at sites where DTV was detected. Virginia opossums, striped skunks and raccoons were the source of infected nymphal ticks collected as replete larvae. Serologic evidence of POWV infection was detected in woodchucks (4/6), an opossum (1/6), and birds (4/727). Lineage I, prototype POWV, was not detected. These data demonstrate widespread enzootic transmission of DTV throughout the Hudson Valley, in particular areas east of the river. High infection rates were detected in counties where recent POW encephalitis cases have been identified, supporting the hypothesis that lineage II POWV, DTV, is responsible for these human infections.
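The MLE infection rates quoted above come from pooled testing: individual tick status is not observed, only whether each pool tests positive. A minimal sketch of that estimator follows, assuming perfect test sensitivity and specificity; the function names are hypothetical, and the closed form applies only when all pools have the same size.

```python
# Minimal pooled-prevalence MLE sketch (perfect assay assumed).
import numpy as np
from scipy.optimize import brentq

def pooled_mle_equal(n_pools, n_positive, pool_size):
    """Closed-form MLE of per-tick infection rate for equal-size pools."""
    return 1.0 - (1.0 - n_positive / n_pools) ** (1.0 / pool_size)

def pooled_mle_variable(pool_sizes, pool_positive):
    """Numeric MLE when pool sizes differ: root of the score function.
    Requires at least one positive and one negative pool."""
    sizes = np.asarray(pool_sizes, dtype=float)
    pos = np.asarray(pool_positive, dtype=bool)

    def score(p):
        q = (1.0 - p) ** sizes          # P(pool tests negative)
        return np.sum(np.where(pos,
                               sizes * q / ((1.0 - q) * (1.0 - p)),
                               -sizes / (1.0 - p)))
    return brentq(score, 1e-9, 1.0 - 1e-9)

# e.g. 3 positive pools out of 100 pools of 5 adults each gives
# about 0.61 infected adults per 100 ticks:
print(100 * pooled_mle_equal(100, 3, 5))
```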
The impact of rigorous mathematical thinking as learning method toward geometry understanding
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-05-01
Reaching higher-order thinking skills requires mastery of conceptual understanding. Rigorous Mathematical Thinking (RMT) is a realization of the cognitive conceptual construction approach based on Feuerstein's Mediated Learning Experience (MLE) theory and Vygotsky's sociocultural theory. This was a quasi-experimental study comparing an experimental class taught with RMT as the learning method and a control class taught with Direct Learning (DL), the conventional learning activity. The study examined whether the two learning methods had different effects on the conceptual understanding of junior high school students. The data were analysed using an independent t-test, which showed a significant difference in mean geometry conceptual understanding between the experimental and control classes. Semi-structured interviews further indicated that students taught with RMT had deeper conceptual understanding than students taught in the conventional way. These results indicate that RMT as a learning method has a positive impact on the conceptual understanding of geometry.
Multilingual Education: The Role of Language Ideologies and Attitudes
ERIC Educational Resources Information Center
Liddicoat, Anthony J.; Taylor-Leech, Kerry
2015-01-01
This paper overviews issues relating to the role of ideologies and attitudes in multilingual education (MLE). It argues that ideologies and attitudes are constituent parts of the language planning process and shape the possibilities for multilingualism in educational programmes in complex ways, but most frequently work to constrain the ways that…
Parental Involvement in Child Assessment: A Dynamic Approach.
ERIC Educational Resources Information Center
SeokHoon, Alice Seng
This paper examines the status of parents in the developmental assessment process and considers how involving parents jointly with the professional to assess their young child may yield more accurate and valuable information. The paper explores the use of a mediated learning experience (MLE) approach as a framework for increasing support for…
Validation Study of Waray Text Readability Instrument
ERIC Educational Resources Information Center
Oyzon, Voltaire Q.; Corrales, Juven B.; Estardo, Wilfredo M., Jr.
2015-01-01
In 2012 the Leyte Normal University developed computer software--modelled after the Spache Readability Formula (1953) made for English--to help rank texts that can be used by teachers or research groups in selecting appropriate reading materials to support the DepEd's MTB-MLE program in Region VIII, in the Philippines. However,…
ERIC Educational Resources Information Center
Page, Tom; Thorsteinsson, Gisli
2006-01-01
The work outlined here provides a comprehensive report and formative observations of the development and implementation of hypermedia resources for learning and teaching used in conjunction with a managed learning environment (MLE). These resources are used to enhance teaching and learning of an electronics module in product design at final year…
Understanding the nature of sleep.
1994-11-23
This is the first article in a series of three looking at patients' sleep in hospitals. This article explores the nature of sleep and reviews the various theories that have been put forward to explain why we need to sleep. The other two articles will concentrate on sleep disorders and hospitalisation, and the role of the night nurse.
Effects of Test Item Disclosure on Medical Licensing Examination
ERIC Educational Resources Information Center
Yang, Eunbae B.; Lee, Myung Ae; Park, Yoon Soo
2018-01-01
In 2012, the National Health Personnel Licensing Examination Board of Korea decided to publicly disclose all test items and answers to satisfy the test takers' right to know and enhance the transparency of tests administered by the government. This study investigated the effects of item disclosure on the medical licensing examination (MLE),…
Emotional Design in Multimedia: Does Gender and Academic Achievement Influence Learning Outcomes?
ERIC Educational Resources Information Center
Kumar, Jeya Amantha; Muniandy, Balakrishnan; Yahaya, Wan Ahmad Jaafar Wan
2016-01-01
This study was designed as a preliminary study (N = 33) to explore the effects of gender and academic achievement (Cumulative Grade Point Average-CGPA) on polytechnic students' learning outcomes when exposed to Multimedia Learning Environments (MLE) designed to induce emotions. Three designs namely positive (PosD), neutral (NeuD) and negative…
MLeXAI: A Project-Based Application-Oriented Model
ERIC Educational Resources Information Center
Russell, Ingrid; Markov, Zdravko; Neller, Todd; Coleman, Susan
2010-01-01
Our approach to teaching introductory artificial intelligence (AI) unifies its diverse core topics through a theme of machine learning, and emphasizes how AI relates more broadly with computer science. Our work, funded by a grant from the National Science Foundation, involves the development, implementation, and testing of a suite of projects that…
2012-01-01
Lactobacillus plantarum is involved in a multitude of food-related industrial fermentation processes, including the malolactic fermentation (MLF) of wine. This work is the first report on a recombinant L. plantarum strain successfully conducting MLF. The malolactic enzyme (MLE) from Oenococcus oeni was cloned into the lactobacillal expression vector pSIP409, which is based on the sakacin P operon of Lactobacillus sakei, and expressed in the host strain L. plantarum WCFS1. Both recombinant and wild-type L. plantarum strains were tested for MLF using a buffered malic acid solution in the absence of glucose. Under conditions with L-malic acid as the only energy source and in the presence of Mn2+ and NAD+, the recombinant L. plantarum and the wild-type strain converted 85% (2.5 g/l) and 51% (1.5 g/l), respectively, of L-malic acid in 3.5 days. Furthermore, in a modified wine the recombinant L. plantarum cells converted 15% (0.4 g/l) of the initial L-malic acid concentration in 2 days. In conclusion, recombinant L. plantarum cells expressing MLE accelerate the malolactic fermentation. PMID:22452826
Bulk flow in the combined 2MTF and 6dFGSv surveys
NASA Astrophysics Data System (ADS)
Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao
2018-07-01
We create a combined sample of 10 904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a maximum likelihood estimator (ηMLE) for bulk flow measurements that allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the Λ cold dark matter expectation for similar size scales.
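The ηMLE introduced above is built for log-distance-ratio errors, and those details are not reproduced here. The sketch below shows the classical Gaussian maximum likelihood bulk-flow estimator that such estimators generalize: with radial peculiar velocities modelled as v_i = B·r̂_i + ε_i, maximizing the likelihood reduces to a 3x3 weighted least-squares solve. The nonlinear velocity dispersion sigma_star and all names are illustrative assumptions.

```python
# Minimal sketch of the standard Gaussian MLE bulk-flow estimator.
import numpy as np

def bulk_flow_mle(nhat, v_rad, sigma, sigma_star=300.0):
    """nhat: (N, 3) unit vectors to galaxies; v_rad: (N,) radial peculiar
    velocities in km/s; sigma: (N,) per-galaxy measurement errors."""
    w = 1.0 / (sigma**2 + sigma_star**2)          # inverse-variance weights
    A = np.einsum("i,ij,ik->jk", w, nhat, nhat)   # 3x3 normal matrix
    b = np.einsum("i,i,ij->j", w, v_rad, nhat)
    B = np.linalg.solve(A, b)                     # bulk-flow vector (km/s)
    cov = np.linalg.inv(A)                        # its covariance
    return B, cov
```

The flow amplitude is np.linalg.norm(B), with the direction obtained by converting B to Galactic coordinates; handling non-Gaussian errors and survey selection, as the paper does, requires replacing the Gaussian likelihood and calibrating against mock catalogues.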
Shah, Farhan Mahmood; Razaq, Muhammad; Han, Peng; Chen, Julian
2017-01-01
Wheat, the staple food of Pakistan, is constantly attacked by the major wheat aphid species Schizaphis graminum (R.), Rhopalosiphum padi (L.) and Sitobion avenae (F.). Due to concerns about synthetic chemical use in wheat, it is imperative to search for alternative, environment- and human-friendly control measures such as botanical pesticides. In the present study, we evaluated the comparative roles of neem seed extract (NSE), moringa leaf extract (MLE) and imidacloprid (I) in the management of the aphids as well as yield loss parameters in late-planted wheat fields. Imidacloprid significantly reduced aphid infestation compared to the other treatments, resulting in higher yield, particularly when applied with MLE. The percentages of yield increase in I+MLE treated plots over the control were 19.15–81.89% for grains per spike, 5.33–37.62% for thousand grain weight and 27.59–61.12% for yield kg/ha. NSE was the second most effective control measure in suppressing the aphid population, but the yield protected by NSE treatment over the control was comparable to that by imidacloprid. Population densities of coccinellids and syrphids in plots treated with NSE-2 were higher than in those treated with imidacloprid in two out of three experiments during 2013–14. The low predator density in imidacloprid-treated plots was attributed to the lower availability of prey aphids. The efficacy of NSE against aphids varied depending on the degree of synchronization among application timing, aphid activity, crop variety and environmental conditions. Despite that, we suggest NSE as a promising alternative botanical insecticide to the most commonly recommended imidacloprid. Further studies should consider the side effects of biopesticides on non-target organisms in order to provide better management practices in the field. PMID:28953894
The effect of social integration on outcomes after major lower extremity amputation.
Hawkins, Alexander T; Pallangyo, Anthony J; Herman, Ayesiga M; Schaumeier, Maria J; Smith, Ann D; Hevelone, Nathanael D; Crandell, David M; Nguyen, Louis L
2016-01-01
Major lower extremity (MLE) amputation is a common procedure that results in a profound change in a patient's life. We sought to determine the association between social support and outcomes after amputation. We hypothesized that patients with greater social support would have better post-amputation outcomes. From November 2011 to May 2013, we conducted a cross-sectional, observational, multicenter study. Social integration was measured by the social integration subset of the Short Form Craig Handicap Assessment and Reporting Technique. Systemic social support was assessed by comparing a United States and a Tanzanian population. Walking function was measured using the 6-minute walk test, and quality of life (QoL) was measured using the EuroQol-5D. We recruited 102 MLE amputees. Sixty-three patients were enrolled in the United States, with a mean age of 58.0. Forty-two (67%) were male. Patients with low social integration were more likely to be unable to ambulate (no walk 39% vs slow walk 23% vs fast walk 10%; P = .01), and those with high social integration were more likely to be fast walkers (no walk 10% vs slow walk 59% vs fast walk 74%; P = .01). This relationship persisted in a multivariable analysis. Increasing social integration scores were also positively associated with increasing QoL scores in a multivariable analysis (β, .002; standard error, 0.0008; P = .02). In comparing the United States population with the Tanzanian cohort (39 subjects), there were no differences in functional or QoL outcomes in the systemic social support analysis. In the United States population, increased social integration is associated with both improved function and QoL outcomes among MLE amputees. Systemic social support, as measured by comparing the United States population with a Tanzanian population, was not associated with improved function or QoL outcomes. In the United States, steps should be taken to identify and aid amputees with poor social integration. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Aneja, Manish Kumar; Geiger, Johannes; Imker, Rabea; Uzgun, Senta; Kormann, Michael; Hasenpusch, Guenther; Maucksch, Christof; Rudolph, Carsten
2009-12-31
The phi C31 integrase has emerged as a potent tool for achieving long-term gene expression in different tissues. The present study aimed at optimizing elements of the phi C31 integrase system for alveolar type II cells. Luciferase and beta-galactosidase activities were measured at different time points post transfection. 5-Aza-2'-deoxycytidine (AZA) and trichostatin A (TSA) were used to inhibit DNA methyltransferase and the histone deacetylase complex (HDAC), respectively. In A549 cells, expression of the integrase from a CMV promoter resulted in the highest integrase activity, whereas in MLE12 cells the CAG and CMV promoters were equally effective. An effect of the polyA site was observed only in A549 cells, where replacement of the SV40 polyA site by the bovine growth hormone (BGH) polyA site enhanced integrase activity. Addition of a C-terminal SV40 nuclear localization signal (NLS) did not significantly increase integrase activity. Long-term expression studies with AZA and TSA provided evidence for post-integrative gene silencing. In MLE12 cells, both DNA methylases and HDACs played a significant role in silencing, whereas in A549 cells silencing could be attributed mainly to HDAC activity. Donor plasmids carrying the cellular promoters ubiquitin B (UBB), ubiquitin C (UBC) and elongation factor 1 alpha (EF1 alpha) in an improved backbone prevented post-integrative gene silencing. In contrast to A549 and MLE12 cells, no silencing was observed in the human bronchial epithelial cell line BEAS-2B. A donor plasmid coding for murine erythropoietin under the EF1 alpha promoter, when combined with phi C31 integrase, resulted in higher long-term erythropoietin expression and subsequently higher hematocrit levels in mice after intravenous delivery to the lungs. These results provide evidence for cell-specific post-integrative gene silencing with phi C31 integrase and demonstrate the pivotal role of the donor plasmid in the long-term expression attained with this system.
Kurakula, Kondababu; Hamers, Anouk A; van Loenen, Pieter; de Vries, Carlie J M
2015-06-19
Mucus hypersecretion and excessive cytokine synthesis are associated with many of the pathologic features of chronic airway diseases such as asthma. 6-Mercaptopurine (6-MP) is an immunosuppressive drug that is widely used in several inflammatory disorders. Although 6-MP has been used to treat asthma, its function and mechanism of action in airway epithelial cells are unknown. Confluent NCI-H292 and MLE-12 epithelial cells were pretreated with 6-MP followed by stimulation with TNFα or PMA. mRNA levels of cytokines and mucins were measured by RT-PCR. Western blot analysis was performed to assess the phosphorylation of IκBα, and luciferase assays were performed using an NFκB reporter plasmid to determine NFκB activity. Periodic Acid Schiff staining was used to assess the production of mucus. 6-MP displayed no effect on cell viability up to a concentration of 15 μM. RT-PCR analysis showed that 6-MP significantly reduces TNFα- and PMA-induced expression of several proinflammatory cytokines in NCI-H292 and MLE-12 cells. Consistent with this, we demonstrated that 6-MP strongly inhibits TNFα-induced phosphorylation of IκBα and thus attenuates NFκB luciferase reporter activity. In addition, 6-MP decreases Rac1 activity in MLE-12 cells. 6-MP down-regulates gene expression of the mucin Muc5ac, but not Muc2, through inhibition of activation of the NFκB pathway. Furthermore, PMA- and TNFα-induced mucus production, as visualized by Periodic Acid Schiff (PAS) staining, is decreased by 6-MP. Our data demonstrate that 6-MP inhibits Muc5ac gene expression and mucus production in airway epithelial cells through inhibition of the NFκB pathway, and 6-MP may represent a novel therapeutic option for mucus hypersecretion in airway diseases.
Ali, Mehboob; Heyob, Kathryn; Jacob, Naduparambil K; Rogers, Lynette K
2016-09-01
Profilin 1, cofilin 1, and vasodilator-stimulated phosphoprotein (VASP) are actin-binding proteins (ABP) that regulate actin remodeling and facilitate cancer cell metastases. miR-17-92 is highly expressed in metastatic tumors, and profilin1 and cofilin1 are predicted targets. Docosahexaenoic acid (DHA) inhibits cancer cell proliferation and adhesion. These studies tested the hypothesis that the metastatic phenotype is driven by changes in ABPs, including alternative phosphorylation and/or changes in subcellular localization. In addition, we tested the efficacy of DHA supplementation to attenuate or inhibit these changes. Human lung cancer tissue sections were analyzed for F-actin content and for expression and cellular localization of profilin1, cofilin1, and VASP (S157 or S239 phosphorylation). The metastatic phenotype was investigated in the A549 and MLE12 cell lines using 8 Br-cAMP as a metastasis inducer and DHA as a therapeutic agent. Migration was assessed by wound assay and expression measured by Western blot and confocal analysis. miR-17-92 expression was measured by qRT-PCR. Results indicated increased expression and altered cellular distribution of profilin1/VASP(pS157), but no changes in cofilin1/VASP(pS239), in the human malignant tissues compared with normal tissues. In A549 and MLE12 cells, the expression patterns of profilin1/VASP(pS157) or cofilin1/VASP(pS239) suggested an interaction in regulation of actin dynamics. Furthermore, DHA inhibited cancer cell migration and viability, ABP expression and cellular localization, and modulated expression of miR-17-92 in A549 cells, with minimal effects in MLE12 cells. Further investigations are warranted to understand ABP interactions, changes in cellular localization, regulation by miR-17-92, and DHA as a novel therapeutic. Mol Cancer Ther; 15(9); 2220-31. ©2016 American Association for Cancer Research.
Bordignon, Annélise; Frédérich, Michel; Ledoux, Allison; Campos, Pierre-Eric; Clerc, Patricia; Hermann, Thomas; Quetin-Leclercq, Joëlle; Cieckiewicz, Ewa
2018-06-01
Due to the in vitro antiplasmodial activity of leaf extracts from Vernonia fimbrillifera Less. (Asteraceae), a bioactivity-guided fractionation was carried out. Three sesquiterpene lactones were isolated, namely 8-(4'-hydroxymethacrylate)-dehydromelitensin (1), onopordopicrin (2) and 8α-[4'-hydroxymethacryloyloxy]-4-epi-sonchucarpolide (3). Their structures were elucidated by spectroscopic methods (1D and 2D NMR and MS analyses) and by comparison with published data. The isolated compounds exhibited antiplasmodial activity with IC50 values ≤ 5 μg/mL. Cytotoxicity of the compounds against a human cancer cell line (HeLa) and a mouse lung epithelial cell line (MLE12) was assessed to determine selectivity. Compound 3 displayed promising selective antiplasmodial activity (SI > 10).
Ali, Mehboob; Heyob, Kathryn; Rogers, Lynette K.
2016-01-01
AIMS: Deaths associated with cancer metastasis have steadily increased, making the need for newer, anti-metastatic therapeutics imperative. Gelsolin and vimentin, actin-binding proteins expressed in metastatic tumors, participate in actin remodelling and regulate cell migration. Docosahexaenoic acid (DHA) limits cancer cell proliferation and adhesion, but the mechanisms involved in reducing metastatic phenotypes are unknown. We aimed to investigate the effects of DHA on gelsolin and vimentin expression, and ultimately cell migration and proliferation, in this context. MAIN METHODS: Non-invasive lung epithelial cells (MLE12) and invasive lung cancer cells (A549) were treated with DHA (30 μmol/ml) and/or 8-bromo-cyclic adenosine monophosphate (8 Br-cAMP) (300 μmol/ml) for 6 or 24 h either before (pre-treatment) or after (post-treatment) plating in transwells. Migration was assessed by the number of cells that progressed through the transwell. Gelsolin and vimentin expression were measured by western blot and confocal microscopy in cells, and by immunohistochemistry in human lung cancer biopsy samples. KEY FINDINGS: A significant decrease in cell migration was detected for A549 cells treated with DHA versus control, but this same decrease was not seen in MLE12 cells. DHA and 8 Br-cAMP altered gelsolin and vimentin expression, but no clear pattern of change was observed. Immunofluorescence staining indicated slightly higher vimentin expression in human lung tissue that was malignant compared to control. SIGNIFICANCE: Collectively, our data indicate that DHA inhibits cancer cell migration and further suggest that vimentin and gelsolin may play secondary roles in cancer cell migration and proliferation, but are not the primary regulators. PMID:27157519
Ming, Jing; Wang, Yaqiang; Du, Zhencai; Zhang, Tong; Guo, Wanqin; Xiao, Cunde; Xu, Xiaobin; Ding, Minghu; Zhang, Dongqi; Yang, Wen
2015-01-01
The widely distributed glaciers in the greater Himalayan region have generally experienced rapid shrinkage since the 1850s. As invaluable sources of water and because of their scarcity, these glaciers are extremely important. Beginning in the twenty-first century, new methods have been applied to measure the mass budget of these glaciers. Investigations have shown that albedo is an important parameter affecting the melting of Himalayan glaciers. The surface albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) data over the Hindu Kush, Karakoram and Himalaya (HKH) glaciers is surveyed in this study for the period 2000-2011. The general albedo trend shows that the glaciers have been darkening since 2000. The most rapid decrease in surface albedo has occurred in the glacial area above 6000 m, which implies that melting will likely extend into snow accumulation areas. The mass-loss equivalent (MLE) of the HKH glacial area caused by surface shortwave radiation absorption is estimated to be 10.4 Gt yr-1, which may contribute 1.2% of global sea level rise on annual average (2003-2009). This work presents what is probably the first picture of albedo variations over the whole HKH glacial area during the period 2000-2011. The most rapid decrease in albedo was detected in the highest areas, which deserves particular attention.
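The mass-loss-equivalent figure follows from a simple energy balance: extra absorbed shortwave energy divided by the latent heat of fusion of ice. The sketch below shows the arithmetic with purely illustrative inputs; the area, albedo change, and mean insolation are assumptions, not values from the study.

```python
# Back-of-envelope mass-loss equivalent from a surface albedo decrease.
# All input numbers are illustrative assumptions, not the study's values.
L_FUSION = 3.34e5          # latent heat of fusion of ice, J/kg
area_m2 = 4.0e10           # assumed glacierised area: 4.0e4 km^2
d_albedo = 0.02            # assumed mean albedo decrease
sw_down = 200.0            # assumed mean downward shortwave flux, W/m^2
sec_per_year = 3.156e7

extra_energy = d_albedo * sw_down * area_m2 * sec_per_year  # J per year
mle_gt = extra_energy / L_FUSION / 1e12                     # kg -> Gt
print(f"mass-loss equivalent ~ {mle_gt:.0f} Gt/yr")         # ~15 Gt/yr here
```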
Stochastic generation of hourly rainstorm events in Johor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-03
Engineers and researchers in water-related studies are often faced with the problem of having insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution appeared to be the best rainfall distribution; therefore, the Monte Carlo simulation was based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of observed rainstorm events over 10 years and simulated rainstorm events over 30 years with those from the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relations were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relations in Johor.
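A minimal sketch of the fit-then-simulate step described above, assuming a lognormal storm-depth model: the lognormal is fitted by MLE, lognormality is screened with an Anderson-Darling test on the log-depths, and synthetic depths are then drawn by Monte Carlo. The placeholder data and names are illustrative, not the station J1 record.

```python
# Minimal sketch: MLE lognormal fit + Anderson-Darling check + Monte
# Carlo generation of synthetic rainstorm depths. Data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
depths = stats.lognorm.rvs(s=0.8, scale=12.0, size=120,
                           random_state=rng)   # stand-in observed record

# MLE fit of the lognormal with the location fixed at zero
shape, loc, scale = stats.lognorm.fit(depths, floc=0)

# lognormality of depths <=> normality of log-depths
ad = stats.anderson(np.log(depths), dist="norm")
print("A^2 =", ad.statistic, "; 5% critical value =", ad.critical_values[2])

# Monte Carlo: draw 30 years' worth of synthetic storm depths
synthetic = stats.lognorm.rvs(shape, loc=0, scale=scale,
                              size=30 * 365, random_state=rng)
```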
ERIC Educational Resources Information Center
Trudell, Barbara
2014-01-01
This article examines a new phenomenon in language activism variously called the multilingual education working group or the multilingual education network, and abbreviated as MLEN. After an analysis of the conceptual and organizational contexts for these activist groups, the six MLENs in existence as of 2013 are described. The groups are then…
Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie
2014-01-01
One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually deteriorated by two factors. One originates from the background noise due to out-of-focus signals, sample autofluorescence, and camera acquisition noise; the other is the low photon count of emitters in a single frame. With a fast acquisition rate, activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimator (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle. PMID:24921337
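A minimal sketch of the photon-aggregation idea behind the tracklet approach: frames that a tracklet links to the same emitter are summed, and the emitter position is then found by maximum likelihood under a Poisson noise model with a Gaussian PSF. This is not the authors' weighted mean-shift pipeline; the known PSF width, the ROI framing, and all names are illustrative assumptions.

```python
# Minimal sketch: aggregate a tracklet's frames, then MLE-localize the
# emitter with a Gaussian PSF and Poisson noise. Names are illustrative.
import numpy as np
from scipy.optimize import minimize

def mle_localize(frames, sigma_psf=1.2):
    """frames: (T, ny, nx) ROI stack that a tracklet assigns to one emitter;
    returns the (x0, y0) MLE of the emitter position in pixels."""
    img = np.sum(frames, axis=0).astype(float)   # aggregate photons
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    def neg_loglik(p):
        x0, y0, n_photons, bg = p
        psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma_psf**2))
        mu = bg + n_photons * psf / (2 * np.pi * sigma_psf**2)
        return np.sum(mu - img * np.log(mu))     # Poisson -log L (+ const)

    p0 = [nx / 2.0, ny / 2.0, img.sum(), max(np.median(img), 1e-3)]
    bounds = [(0, nx - 1), (0, ny - 1), (1.0, None), (1e-3, None)]
    res = minimize(neg_loglik, p0, method="L-BFGS-B", bounds=bounds)
    return res.x[0], res.x[1]
```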
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
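The JSDLF construction itself (the frequency/pixel selection matrices and the RF and video forward models) is not reproduced here, but its core computational step, recovering a jointly sparse coefficient matrix over the discretized single-target state grid from several sensors' measurements, can be illustrated with simultaneous orthogonal matching pursuit, a standard joint-sparse recovery routine of the same family. All names below are hypothetical.

```python
# Minimal sketch: simultaneous OMP (SOMP) for joint-sparse recovery.
import numpy as np

def somp(Phi, Y, k):
    """Recover a row-sparse X (at most k active rows) with Y ~= Phi @ X.
    Phi: (m, n) dictionary over the discretized target state grid;
    Y: (m, L) stacked measurements from L (heterogeneous) sensors,
    each already mapped into the common measurement model."""
    R = Y.copy()
    support = []
    for _ in range(k):
        # grid cell most correlated with the residual across all sensors
        scores = np.sum(np.abs(Phi.T @ R), axis=1)
        scores[support] = -np.inf                 # do not reselect
        support.append(int(np.argmax(scores)))
        sub = Phi[:, support]
        X_s, *_ = np.linalg.lstsq(sub, Y, rcond=None)
        R = Y - sub @ X_s                         # update residual
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    X[support] = X_s
    return X, sorted(support)
```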
Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan
2017-01-01
It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot, for it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, multipath mitigation techniques based on dual polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, which is a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM for different received signal cases. Based on this assessment we answer the fundamental question above and establish the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna. PMID:28208832
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties using a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process uses a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations and are used as key inputs to a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.
F-8C adaptive flight control laws
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.
1977-01-01
Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft applications. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effect on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.
Wang, Xue-Feng; Song, Shun-de; Li, Ya-Jun; Hu, Zheng Qiang; Zhang, Zhe-Wen; Yan, Chun-Guang; Li, Zi-Gang; Tang, Hui-Fang
2018-06-01
Quercetin (Que), an abundant flavonol, possesses potent antioxidative properties and has a protective effect in lipopolysaccharide (LPS)-induced acute lung injury (ALI), but the specific mechanism is still unclear, so we investigated the effect of Que in vivo and in vitro and the related cAMP-PKA/Epac pathway. The results in mice suggested that Que can inhibit the release of inflammatory cytokines, block neutrophil recruitment, and decrease albumin leakage in a dose-dependent manner. At the same time, Que increased the cAMP and Epac content of lung tissue, but not PKA. The results in epithelial cells (MLE-12) suggested that Que can also inhibit the release of the inflammatory mediator keratinocyte-derived chemokine after LPS stimulation; the Epac inhibitor ESI-09 functionally antagonizes the inhibitory effect of Que, while the PKA inhibitor H89 functionally enhances it. Overexpression of Epac1 in MLE-12 cells suggested that Epac1 enhances the effect of Que. All these results suggest that the protective effect of quercetin in ALI involves the cAMP-Epac pathway.
NASA Astrophysics Data System (ADS)
von Boehn, Bernhard; Mehrwald, Sarah; Imbihl, Ronald
2018-04-01
Various oxidation reactions with NO as oxidant have been investigated on a partially VOx-covered Rh(111) surface (θV = 0.3 MLE) in the 10-4 mbar range, using photoelectron emission microscopy (PEEM) as a spatially resolving method. The PEEM studies are complemented by rate measurements and by low-energy electron diffraction. In catalytic methanol oxidation with NO and in the NH3 + NO reaction, we observe that, starting from a homogeneous surface, with increasing temperature first a stripe pattern develops, followed by a pattern in which macroscopic holes of nearly bare metal surface are surrounded by a VOx film. These hole patterns are just the inverse of the VOx distribution patterns seen if O2 instead of NO is used as oxidant.
Titan dune heights retrieval by using Cassini Radar Altimeter
NASA Astrophysics Data System (ADS)
Mastrogiuseppe, M.; Poggiali, V.; Seu, R.; Martufi, R.; Notarnicola, C.
2014-02-01
The Cassini Radar is a Ku-band multimode instrument capable of providing topographic and mapping information. During several of the 93 Titan fly-bys performed by Cassini, the radar collected a large amount of data, observing many dune fields in multiple modes such as SAR, Altimeter, Scatterometer and Radiometer. Understanding dune characteristics, such as shape and height, will reveal important clues about Titan's climatic and geological history and provide a better understanding of aeolian processes on Earth. Dunes are believed to be sculpted by the action of the wind, weak at the surface but still able to activate the process of sand-sized particle transport. This work aims to estimate dune heights by modeling the shape of the real Cassini Radar Altimeter echoes. Joint processing of SAR and Altimeter data has been adopted to localize the altimeter footprints overlapping dune fields, excluding non-dune features. The height of the dunes was estimated by applying Maximum Likelihood Estimation along with a non-coherent electromagnetic (EM) echo model, comparing the real averaged waveform with the theoretical curves. This analysis has been performed over the Fensal dune field observed during the T30 flyby (May 2007). As a result, we found that the estimated peak-to-trough dune heights were of the order of 60-120 m. The estimation accuracy and robustness of the MLE for different complex scenarios were assessed via radar simulations and a Monte Carlo approach. We simulated different dune and interdune compositions and roughnesses over a large set of values, verifying that, within the range of possible Titan environmental conditions, these two surface parameters have weak effects on our estimates of the standard deviation of dune heights. The results presented here are the first part of a study that will cover all of Titan's sand seas.
Ultrathin TiO(x) films on Pt(111): a LEED, XPS, and STM investigation.
Sedona, Francesco; Rizzi, Gian Andrea; Agnoli, Stefano; Llabrés i Xamena, Francesc X; Papageorgiou, Anthoula; Ostermann, Dieter; Sambi, Mauro; Finetti, Paola; Schierbaum, Klaus; Granozzi, Gaetano
2005-12-29
Ultrathin ordered titanium oxide films on the Pt(111) surface are prepared by reactive evaporation of Ti in oxygen. By varying the Ti dose and the annealing conditions (i.e., temperature and oxygen pressure), six different long-range ordered phases are obtained. They are characterized by means of low-energy electron diffraction (LEED), X-ray photoemission spectroscopy (XPS), and scanning tunneling microscopy (STM). By careful optimization of the preparative parameters, we find conditions where predominantly single phases of TiO(x), revealing distinct LEED patterns and STM images, are produced. XPS binding energy and photoelectron diffraction (XPD) data indicate that all the phases, except one (the stoichiometric rect-TiO2), are one monolayer thick and composed of a Ti-O bilayer with interfacial Ti. Atomically resolved STM images confirm that these TiO(x) phases wet the Pt surface, in contrast to rect-TiO2. This indicates their interface stabilization. At a low Ti dose (0.4 monolayer equivalents, MLE), an incommensurate kagomé-like low-density phase (k-TiO(x) phase) is observed, where hexagons share their vertexes. At a higher Ti dose (0.8 MLE), two denser phases are found, both characterized by a zigzag motif (z- and z'-TiO(x) phases), but with distinct rectangular unit cells. Among them, z'-TiO(x), which is obtained by annealing in ultrahigh vacuum (UHV), shows a larger unit cell. When the postannealing of the 0.8 MLE deposit is carried out at high temperatures and high oxygen partial pressures, the incommensurate, nonwetting, fully oxidized rect-TiO2 is found. Its symmetry and lattice dimensions are almost identical with those of rect-VO2, observed in the system VO(x)/Pd(111). At a higher coverage (1.2 MLE), two commensurate hexagonal phases are formed, namely the w- [(square root(43) x square root(43)) R 7.6 degrees] and w'-TiO(x) phase [(7 x 7) R 21.8 degrees]. They show wagon-wheel-like structures and have slightly different lattice dimensions. Larger Ti deposits produce TiO2 nanoclusters on top of the different monolayer films, as supported by both XPS and STM data. Besides the formation of TiO(x) surface phases, wormlike features are found on the bare parts of the substrate by STM. We suggest that these structures, probably multilayer disordered TiO2, represent growth precursors of the ordered phases. Our results on the different nanostructures are compared with literature data on similar systems, e.g., VO(x)/Pd(111), VO(x)/Rh(111), TiO(x)/Pd(111), TiO(x)/Pt(111), and TiO(x)/Ru(0001). Similar and distinct features are observed in the TiO(x)/Pt(111) case, which may be related to the different chemical natures of the overlayer and the substrate.
NASA Astrophysics Data System (ADS)
Adavallan, K.; Gurushankar, K.; Nazeer, Shaiju S.; Gohulkumar, M.; Jayasree, Ramapurath S.; Krishnakumar, N.
2017-06-01
Fluorescence spectroscopic techniques have the potential to assess metabolic changes during disease development and to evaluate treatment response in a non-invasive and label-free manner. The present study aims to evaluate the effect of mulberry-mediated gold nanoparticles (MAuNPs) in comparison with mulberry leaf extract alone (MLE) by monitoring endogenous fluorophores and quantifying the metabolic changes associated with mitochondrial redox states in streptozotocin-induced diabetic liver tissues using fluorescence spectroscopy. Two mitochondrial metabolic coenzymes, reduced nicotinamide dinucleotide (NADH) and oxidized flavin adenine dinucleotide (FAD), are autofluorescent and are important optical biomarkers for estimating the redox state of a cell. Significant differences in the autofluorescence spectral signatures between the control and the experimental diabetic animals were noticed at an excitation wavelength of 320 nm with emission ranging from 350 to 550 nm. A direct correlation between the progression of diabetes and the levels of collagen and the optical redox ratio was observed. The results revealed a significant increase in the emission of collagen in diabetic liver tissues compared with control liver tissues. Moreover, a significant decrease in the optical redox ratio (FAD/(FAD + NADH)) was observed in diabetic liver tissues, indicating increased oxidative stress compared with the liver tissues of control rats. The extent of the increased oxidative stress was confirmed by reduced levels of reduced glutathione (GSH) in diabetic liver tissues. On a comparative basis, treatment with MAuNPs was found to be more effective than MLE in reducing the progression of diabetes and improving the optical redox ratio to a near-normal range in streptozotocin-induced diabetic liver tissues. Furthermore, principal component analysis followed by linear discriminant analysis (PC-LDA) was used to classify the autofluorescence emission spectra from the control and experimental groups of diabetic rats. The results of this study raise the important possibility that fluorescence spectroscopy, in conjunction with multivariate statistical analysis, has tremendous potential for monitoring or potentially predicting responses to therapy.
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
Turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment fields' probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach can provide a method of characterizing increment fields with the sole use of only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
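A minimal sketch of the MLE fitting step for the NIG model described above, using SciPy's norminvgauss in SciPy's parameterization (tail weight a, asymmetry b, location, scale). The synthetic increments stand in for real Δu samples; once fitted, only the four parameters are needed to characterize the increment pdf, including its higher-order moments.

```python
# Minimal sketch: MLE fit of the normal inverse Gaussian (NIG) law to
# velocity increments. Synthetic data stand in for measured increments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
du = stats.norminvgauss.rvs(a=1.5, b=0.2, loc=0.0, scale=1.0,
                            size=5000, random_state=rng)

# four-parameter MLE fit: tail weight a, asymmetry b, location, scale
a, b, loc, scale = stats.norminvgauss.fit(du)

# moments implied by the fitted law, cheap once the fit is done
mean, var, skew, kurt = stats.norminvgauss.stats(
    a, b, loc=loc, scale=scale, moments="mvsk")
print(a, b, loc, scale, skew, kurt)
```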
ERIC Educational Resources Information Center
Eden, S.; Bezer, M.
2011-01-01
The research examined the effect of an intervention program employing 3D immersive virtual reality (IVR), which focused on the perception of sequential time, on the mediation level and behavioural aspects of children with intellectual disability (ID). The intervention is based on the mediated learning experience (MLE) theory, which refers to the…
ERIC Educational Resources Information Center
Saady, Amany; Ibrahim, Raphiq; Eviatar, Zohar
2015-01-01
The goal of the present study was to extend the models explaining the missing-letter effect (MLE) to an additional language and orthography, and to test the role of phonology in silent reading in Arabic. We also examined orthographic effects such as letter position and letter shape, morphological effects such as pseudo-prefixes, and phonological…
Optimal Repairman Allocation Models
1976-03-01
[OCR-garbled excerpt: equation (3.1.1) and Theorem 3.1D, limit results for a state X under policy π, proven by induction on |C_Q(X)|; the full statements are not recoverable from the scan.]
ERIC Educational Resources Information Center
Redmond, Theresa
2016-01-01
This essay is a response to Brown's (2015) article describing her strategy of transaction circles as a student-centered, culturally responsive, and democratic literacy practice. In my response, I provide further evidence from the field of media literacy education (MLE) that serves to enhance Brown's argument for using transaction circles in order…
Alkon, Abbey; Boyce, W Thomas; Neilands, Torsten B; Eskenazi, Brenda
2017-01-01
Sleep problems are common for young children, especially if they live in adverse home environments. Some studies investigate whether young children may also be at a higher risk of sleep problems if they have a specific biological sensitivity to adversity. This paper addresses the research question: do the relations between children's exposure to family adversities and their sleep problems differ depending on their autonomic nervous system's sensitivity to challenges? As part of a larger cohort study of Latino, low-income families, we assessed the cross-sectional relations among family demographics (education, marital status), adversities [routines, major life events (MLE)], and biological sensitivity as measured by autonomic nervous system (ANS) reactivity associated with parent-rated sleep problems when the children were 5 years old. Mothers were interviewed in English or Spanish and completed demographic, family, and child measures. The children completed a 15-min standardized protocol while continuous cardiac measures of the ANS [respiratory sinus arrhythmia (RSA), preejection period (PEP)] were collected during resting and four challenge conditions. Reactivity was defined as the mean of the responses to the four challenge conditions minus the first resting condition. Four ANS profiles (co-activation, co-inhibition, reciprocal low RSA and PEP reactivity, and reciprocal high RSA and PEP reactivity) were created by dichotomizing the reactivity scores as high or low reactivity. Logistic regression models showed significant main effects: children living in families with fewer daily routines had more sleep problems than children living in families with daily routines. There were significant interactions in predicting children's sleep problems for children with low PEP reactivity and for children with the reciprocal, low-reactivity profiles who experienced major family life events. Children who had a reciprocal, low-reactivity ANS profile and experienced more MLEs had more sleep problems than children who experienced fewer MLEs. These findings suggest that children who experience family adversities have different risks of developing sleep problems depending on their biological sensitivity. Interventions are needed for young Latino children that support family routines and reduce the impact of family adversities to help them develop healthy sleep practices.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers could be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
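The MLE of a Poisson parameter is simply the sample mean; the sketch below pairs that estimate with a likelihood-ratio (G-test) comparison of observed count frequencies against the fitted distribution, in the spirit of the test described above. The colony-count data are simulated, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
counts = rng.poisson(1.5, size=200)      # stand-in for cell counts per sample

lam = counts.mean()                      # Poisson MLE: the sample mean
kmax = counts.max()
k = np.arange(kmax + 1)
obs = np.bincount(counts, minlength=kmax + 1).astype(float)

# expected frequencies under Poisson(lam); lump the upper tail into the last bin
exp_p = stats.poisson.pmf(k, lam)
exp_p[-1] += stats.poisson.sf(kmax, lam)
exp = exp_p * counts.size

# likelihood-ratio (G) statistic against the fitted Poisson
mask = obs > 0
G = 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))
df = int(mask.sum()) - 1 - 1             # bins - 1 - one estimated parameter
p = stats.chi2.sf(G, df)
print(f"lambda_hat={lam:.3f}, G={G:.2f}, df={df}, p={p:.3f}")
```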
Widespread Albedo Decreasing and Induced Melting of Himalayan Snow and Ice in the Early 21st Century
Ming, Jing; Wang, Yaqiang; Du, Zhencai; Zhang, Tong; Guo, Wanqin; Xiao, Cunde; Xu, Xiaobin; Ding, Minghu; Zhang, Dongqi; Yang, Wen
2015-01-01
Background: The widely distributed glaciers in the greater Himalayan region have generally experienced rapid shrinkage since the 1850s. As invaluable sources of water and because of their scarcity, these glaciers are extremely important. Beginning in the twenty-first century, new methods have been applied to measure the mass budget of these glaciers. Investigations have shown that the albedo is an important parameter affecting the melting of Himalayan glaciers. Methodology/Principal Findings: The surface albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) data over the Hindu Kush, Karakoram and Himalaya (HKH) glaciers is surveyed in this study for the period 2000–2011. The general albedo trend shows that the glaciers have been darkening since 2000. The most rapid decrease in surface albedo has occurred in the glacial area above 6000 m, which implies that melting will likely extend to snow accumulation areas. The mass-loss equivalent (MLE) of the HKH glacial area caused by surface shortwave radiation absorption is estimated to be 10.4 Gt yr-1, which may contribute 1.2% of global sea level rise on annual average (2003–2009). Conclusions/Significance: This work presents perhaps the first picture of the albedo variations over the whole HKH glacial area during the period 2000–2011. The most rapid decrease in albedo has been detected in the highest areas, which deserves particular concern. PMID:26039088
Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates
NASA Astrophysics Data System (ADS)
Liu, Bin; King, Matt; Dai, Wujiao
2018-05-01
Spatially-correlated common mode error always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power-law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement of the GPS-observed velocities with four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of various GIA model predictions.
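A compact sketch of the WN+PL velocity estimation step is given below. It assumes evenly spaced epochs, a fixed spectral index (flicker noise), and the fractional-difference construction of the power-law covariance commonly used in GPS noise analysis; the series itself is synthetic.

```python
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve
from scipy.optimize import minimize

def pl_transform(n, kappa):
    """Lower-triangular transform turning white noise into power-law noise of
    spectral index kappa (fractional-difference coefficients, Hosking-style)."""
    d = -kappa / 2.0
    h = np.ones(n)
    for i in range(1, n):
        h[i] = h[i - 1] * (i - 1 + d) / i
    return toeplitz(h, np.zeros(n))

def neg_loglik(params, t, y, P):
    a, b, sw, spl = params                 # intercept, velocity, WN and PL sigmas
    r = y - (a + b * t)
    C = sw**2 * np.eye(t.size) + spl**2 * P
    cf = cho_factor(C)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return 0.5 * (logdet + r @ cho_solve(cf, r))   # additive constant dropped

rng = np.random.default_rng(2)
n = 200
t = np.arange(n) / 52.0                    # weekly epochs, in years
T_pl = pl_transform(n, kappa=-1.0)         # flicker noise, index held fixed
P = T_pl @ T_pl.T                          # unit-amplitude power-law covariance

# synthetic series: 1.5 mm/yr trend + white + flicker noise
y = 1.5 * t + rng.normal(0.0, 1.5, n) + 1.0 * (T_pl @ rng.standard_normal(n))

res = minimize(neg_loglik, x0=[0.0, 1.0, 1.0, 0.5], args=(t, y, P),
               method="Nelder-Mead")
print("intercept, velocity, sigma_wn, sigma_pl:", np.round(res.x, 3))
```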
NASA Astrophysics Data System (ADS)
Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.
2012-12-01
The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry-to-wet and wet-to-dry transition seasons in November 2008 and May 2009, respectively. It resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With the data we attempt to estimate a carbon budget for the BAB, to determine whether regional aircraft experiments can provide strong constraints for a budget, and to compare inversion frameworks when optimizing flux estimates. We use an LPDM to integrate satellite, aircraft, and surface data with mesoscale meteorological fields to link bottom-up and top-down models and to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model, driven by meteorological fields from BRAMS, ECMWF, and WRF, is coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere-Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation and temperature) from high-temporal-resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using land-use scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. The aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for the parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning during the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions of biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement between observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows that our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.
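The optimization of flux scaling factors described above is, in outline, a linear Gaussian inversion. The sketch below contrasts the MLE (generalized least squares) and Bayesian estimates on toy numbers; the Jacobian, error covariances, and prior are all illustrative assumptions rather than BARCA values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_veg = 60, 8                      # aircraft CO2 observations, vegetation types

# K: sensitivity of each observation to each vegetation type's flux scaling
# (in a real inversion this comes from transport model runs; here it is random)
K = rng.normal(0.0, 1.0, (n_obs, n_veg))
s_true = rng.normal(1.0, 0.3, n_veg)      # "true" scaling factors for GEE/R
y = K @ s_true + rng.normal(0.0, 0.5, n_obs)

R = 0.5**2 * np.eye(n_obs)                # observation-error covariance
Ri = np.linalg.inv(R)

# MLE (generalized least squares): minimize (y - K s)^T R^-1 (y - K s)
s_mle = np.linalg.solve(K.T @ Ri @ K, K.T @ Ri @ y)

# Bayesian estimate with a Gaussian prior s ~ N(1, B)
B = 0.5**2 * np.eye(n_veg)
Bi = np.linalg.inv(B)
s_prior = np.ones(n_veg)
s_bayes = np.linalg.solve(K.T @ Ri @ K + Bi, K.T @ Ri @ y + Bi @ s_prior)

print("MLE  :", np.round(s_mle, 2))
print("Bayes:", np.round(s_bayes, 2))
```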
U.S.-Vietnamese Security Cooperation for Access to the SCS
2015-06-16
of the claimants to islands in the SCS, China has more Coast Guard/MLE vessels than Japan, Indonesia, Malaysia, and the Philippines combined.7...45 In 2011, the U.S. looked at Vietnam, along with other Southeast Asian countries, for its Naval Medical ...human rights; culture, tourism, and sports; as well as continued annual defense dialogue meetings.47 Both Vietnam and the U.S. are concerned about
Temporal Delineation and Quantification of Short Term Clustered Mining Seismicity
NASA Astrophysics Data System (ADS)
Woodward, Kyle; Wesseloo, Johan; Potvin, Yves
2017-07-01
The assessment of the temporal characteristics of seismicity is fundamental to understanding and quantifying the seismic hazard associated with mining, the effectiveness of strategies and tactics used to manage seismic hazard, and the relationship between seismicity and changes to the mining environment. This article aims to improve the accuracy and precision with which the temporal dimension of seismic responses can be quantified and delineated. We present a review and discussion of the occurrence of time-dependent mining seismicity, with a specific focus on temporal modelling and the modified Omori law (MOL). This forms the basis for the development of a simple weighted metric that allows for the consistent temporal delineation and quantification of a seismic response. The optimisation of this metric allows for the selection of the most appropriate modelling interval given the temporal attributes of time-dependent mining seismicity. We evaluate the performance of the weighted metric by modelling a synthetic seismic dataset. This assessment shows that seismic responses can be quantified and delineated by the MOL with reasonable accuracy and precision when the modelling is optimised by evaluating the weighted MLE metric. Furthermore, this assessment highlights that decreased weighted MLE metric performance can be expected if there is a lack of contrast between the temporal characteristics of events associated with different processes.
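For context, MOL parameters are typically obtained by MLE for an inhomogeneous Poisson process. The sketch below fits λ(t) = K/(t + c)^p to a synthetic aftershock sequence; it is a generic illustration, not the weighted-metric procedure of the article.

```python
import numpy as np
from scipy.optimize import minimize

def mol_neg_loglik(theta, t, T):
    """Negative log-likelihood of the modified Omori law lambda(t) = K/(t+c)^p,
    treating the sequence as an inhomogeneous Poisson process on (0, T]."""
    K, c, p = np.exp(theta)                # log-parameterization keeps K, c, p > 0
    lam = K / (t + c) ** p
    if abs(p - 1.0) < 1e-9:
        integral = K * np.log((T + c) / c)
    else:
        integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return integral - np.sum(np.log(lam))

# simulate a MOL sequence (K=80, c=0.05 d, p=1.1) by Poisson thinning
rng = np.random.default_rng(4)
T, K0, c0, p0 = 30.0, 80.0, 0.05, 1.1
lam_max = K0 / c0 ** p0
cand = rng.uniform(0.0, T, rng.poisson(lam_max * T))
keep = rng.uniform(size=cand.size) < (K0 / (cand + c0) ** p0) / lam_max
t = np.sort(cand[keep])

res = minimize(mol_neg_loglik, x0=np.log([10.0, 0.1, 1.0]), args=(t, T),
               method="Nelder-Mead")
print("events:", t.size, " K, c, p =", np.round(np.exp(res.x), 3))
```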
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Consuelo Juanita
Recent amendments to the Safe Drinking Water Act emphasize efforts toward safeguarding our nation's water supplies against attack and contamination. Specifically, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 established requirements for each community water system serving more than 3300 people to conduct an assessment of the vulnerability of its system to a terrorist attack or other intentional acts. Integral to evaluating system vulnerability is the threat assessment, which is the process by which the credibility of a threat is quantified. Unfortunately, full probabilistic assessment is generally not feasible, as there is insufficient experience and/or data to quantify the associated probabilities. For this reason, an alternative approach is proposed based on Markov Latent Effects (MLE) modeling, which provides a framework for quantifying imprecise subjective metrics through possibilistic or fuzzy mathematics. Here, an MLE model for water systems is developed and demonstrated to determine threat assessments for different scenarios identified by the assailant, asset, and means. Scenario assailants include terrorists, insiders, and vandals. Assets include a water treatment plant, water storage tank, node, pipeline, well, and a pump station. Means used in attacks include contamination (onsite chemicals, biological and chemical), explosives and vandalism. Results demonstrated that the highest threats are vandalism events and the least likely events are those performed by a terrorist.
Ability evaluation by binary tests: Problems, challenges & recent advances
NASA Astrophysics Data System (ADS)
Bashkansky, E.; Turetsky, V.
2016-11-01
Binary tests designed to measure abilities of objects under test (OUTs) are widely used in different fields of measurement theory and practice. The number of test items in such tests is usually very limited. The response to each test item provides only one bit of information per OUT. The problem of correct ability assessment is even more complicated when the levels of difficulty of the test items are unknown beforehand. This fact makes the search for effective ways of planning and processing the results of such tests highly relevant. In recent years, there has been some progress in this direction, generated by both the development of computational tools and the emergence of new ideas. The latter are associated with the use of so-called “scale invariant item response models”. Together with the maximum likelihood estimation (MLE) approach, they have helped to solve some problems of engineering and proficiency testing. However, several issues related to the assessment of uncertainties, replication scheduling, the use of placebo, as well as the evaluation of multidimensional abilities, still present a challenge for researchers. The authors attempt to outline the ways to solve the above problems.
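As a concrete illustration of MLE ability estimation from binary responses, the sketch below applies Newton-Raphson to the Rasch model with known item difficulties. It is a toy example and sidesteps the all-correct/all-wrong response vectors for which the MLE diverges.

```python
import numpy as np

def mle_ability(responses, difficulties, n_iter=20):
    """Newton-Raphson MLE of a single ability theta under the Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b_i))).  A minimal sketch; real
    proficiency tests need care with all-correct/all-wrong response vectors."""
    x = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        grad = np.sum(x - p)                  # d logL / d theta
        info = np.sum(p * (1.0 - p))          # Fisher information
        theta += grad / info
    se = 1.0 / np.sqrt(np.sum(p * (1.0 - p))) # asymptotic standard error
    return theta, se

# 10 binary test items of known difficulty; the OUT answers 7 correctly
b = np.linspace(-2, 2, 10)
x = np.array([1, 1, 1, 1, 1, 0, 1, 1, 0, 0])
theta, se = mle_ability(x, b)
print(f"theta_hat = {theta:.3f} +/- {se:.3f}")
```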
Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.
Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús
2008-10-01
Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the maximum specificity set cover (MSSC) approach, and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt IDs/accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and to be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website: http://cytoprophet.cse.nd.edu.
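To convey the flavor of the MLE/EM option, the sketch below runs a stripped-down EM for domain-pair interaction probabilities, where a protein pair interacts if at least one of its domain pairs does. Unlike the full model it ignores assay false-positive/false-negative rates, and the protein-pair data are toy.

```python
import numpy as np

# toy data: each protein pair maps to the set of domain-pair indices it contains,
# plus a 0/1 experimental interaction label (no assay noise in this sketch)
pairs = [({0, 1}, 1), ({0}, 1), ({1, 2}, 0), ({0, 2}, 1), ({2}, 0), ({1}, 1)]
n_dp = 3                                   # number of distinct domain pairs

lam = np.full(n_dp, 0.5)                   # P(domain pair d interacts)
for _ in range(200):                       # EM iterations
    num = np.zeros(n_dp)
    den = np.zeros(n_dp)
    for dset, obs in pairs:
        d = np.array(sorted(dset))
        p_int = 1.0 - np.prod(1.0 - lam[d])    # P(protein pair interacts)
        for di in d:
            den[di] += 1.0
            if obs == 1:
                # E-step: expected indicator that domain pair di interacts,
                # given that the protein pair was observed to interact
                num[di] += lam[di] / p_int
    lam = num / den                        # M-step
print("estimated domain-pair interaction probabilities:", np.round(lam, 3))
```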
Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej
2013-08-01
Gompertz-related distributions have dominated mortality studies for 187 years. However, unrelated distributions also fit well to mortality data. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The study aims were therefore to compare skew-t fits to Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
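The study's comparison can be mimicked in outline: fit a skew-t and a Gompertz by MLE and compare BICs. The sketch below codes the Azzalini skew-t log-likelihood directly and uses SciPy's gompertz as the competitor; the ages-at-death sample is synthetic, and this Python sketch only loosely mirrors the paper's R/bbmle workflow.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def skewt_nll(params, x):
    """Negative log-likelihood of the Azzalini skew-t: location xi, scale om,
    slant al, degrees of freedom nu (om and nu fitted on the log scale)."""
    xi, log_om, al, log_nu = params
    om, nu = np.exp(log_om), np.exp(log_nu)
    z = (x - xi) / om
    w = al * z * np.sqrt((nu + 1.0) / (nu + z**2))
    logf = (np.log(2.0) - np.log(om) + stats.t.logpdf(z, nu)
            + stats.t.logcdf(w, nu + 1.0))
    return -np.sum(logf)

rng = np.random.default_rng(5)
# toy left-skewed ages at death, standing in for cohort mortality data
ages = stats.skewnorm(-4, loc=88, scale=10).rvs(2000, random_state=rng)

res_st = minimize(skewt_nll, x0=[85.0, np.log(8.0), -2.0, np.log(10.0)],
                  args=(ages,), method="Nelder-Mead")
bic_st = 4 * np.log(ages.size) + 2 * res_st.fun      # k*ln(n) - 2*lnL

c, loc, scale = stats.gompertz.fit(ages)             # generic MLE fit
ll_g = np.sum(stats.gompertz.logpdf(ages, c, loc, scale))
bic_g = 3 * np.log(ages.size) - 2 * ll_g

print(f"BIC skew-t   = {bic_st:.1f}")
print(f"BIC Gompertz = {bic_g:.1f}   (lower is better)")
```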
TIGA Tide Gauge Data Reprocessing at GFZ
NASA Astrophysics Data System (ADS)
Deng, Zhiguo; Schöne, Tilo; Gendt, Gerd
2014-05-01
To analyse tide gauge measurements for the purpose of global long-term sea level change research, a well-defined absolute reference frame is required by the oceanographic community. To create such a frame, the data from a global GNSS network located at or near tide gauges are processed. The International GNSS Service (IGS) Tide Gauge Benchmark Monitoring Working Group (TIGA-WG) is responsible for analyzing the GNSS data on a preferably continuous basis. As one of the TIGA Analysis Centers, the German Research Centre for Geosciences (GFZ) is contributing to the IGS TIGA Reprocessing Campaign. The solutions of the TIGA Reprocessing Campaign will also contribute to the 2nd IGS Data Reprocessing Campaign as the GFZ IGS reprocessing solution. After the first IGS reprocessing finished in 2010, some improvements were implemented in the latest GFZ software version EPOS.P8: the reference frame IGb08 based on ITRF2008, the antenna calibration igs08.atx, the geopotential model EGM2008, higher-order ionospheric effects, the new a priori meteorological model GPT2, the VMF mapping function, and other minor improvements. GPS data of the globally distributed tracking network of 794 stations for the time span from 1994 until the end of 2012 are used for the TIGA reprocessing. To handle such a large network, a new processing strategy was developed and is described in detail. In the TIGA reprocessing, the GPS@TIGA data are processed in precise point positioning (PPP) mode to clean the data, using the IGS reprocessing orbit and clock products. To validate the quality of the PPP coordinate results, the vertical movement rates of 80 GPS@TIGA stations are estimated from the PPP results using the Maximum Likelihood Estimation (MLE) method. The rates are compared with the solution of the University of La Rochelle Consortium (ULR) (named ULR5). 56 of the 80 stations have a difference in vertical velocity below 1 mm/yr. The error bars of the PPP rates are significantly larger than those of ULR5, which indicates large time-correlated noise in the PPP solutions.
Liu, Dehua; Chan, Ben Chung-Lap; Cheng, Ling; Tsang, Miranda Sin-Man; Zhu, Jing; Wong, Chun-Wai; Jiao, Delong; Chan, Helen Yau-Tsz; Leung, Ping Chung; Lam, Christopher Wai-Kei; Wong, Chun Kwok
2018-03-02
The immune system responds to Mycobacterium tuberculosis (MTB) infection by forming granulomas to quarantine the bacteria from spreading. Granuloma-mediated inflammation is a cause of lung destruction and disease transmission. Sophora flavescens (SF) has been demonstrated to exhibit bactericidal activities against MTB. However, its immune modulatory activities on MTB-mediated granulomatous inflammation have not been reported. In the present study, we found that flavonoids from Sophora flavescens (FSF) significantly suppressed the pro-inflammatory mediators released from mouse lung alveolar macrophages (MH-S) upon stimulation by trehalose dimycolate (TDM), the most abundant lipoglycan on the MTB surface. Moreover, FSF reduced adhesion molecule (LFA-1) expression on MH-S cells after TDM stimulation. Furthermore, FSF treatment of TDM-activated lung epithelial (MLE-12) cells significantly downregulated macrophage chemoattractant protein (MCP-1/CCL2) expression, which in turn reduced the in vitro migration of MH-S cells toward MLE-12 cells. In addition, FSF increased the clearance of mycobacteria (Mycobacterium aurum) in macrophages. FSF mainly affected the Mincle-Syk-Erk signaling pathway in TDM-activated MH-S cells. In a TDM-induced mouse granuloma model, oral administration of FSF significantly suppressed lung granuloma formation and inflammation. These findings collectively implicate an anti-inflammatory role of FSF in MTB-mediated granulomatous inflammation, thereby providing evidence for FSF as an efficacious adjunct treatment during mycobacterial infection.
Neutron Radiation Effects in Fiber Optics.
1980-06-05
[OCR-garbled fragments of a U.S. Naval Academy Trident Scholar report (USNA-TSPR-107, AD-A091 661), Naval Academy, Annapolis, MD: "...due to nature's effects, the photophone as a device was doomed. However, the principles of voice transmission by modulated light beams were not..."; the remaining scanned text is not recoverable.]
Index of FAA Office of Aviation Medicine Reports: 1961 through 1993
1994-01-01
[OCR residue of a report documentation page: Index of FAA Office of Aviation Medicine Reports: 1961 through 1993 (AD-A275 913). Office of Aviation Medicine, Federal Aviation Administration, 800 Independence Avenue, S.W., Washington, DC 20591; report date January 1994.]
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases, the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases, the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
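The univariate CBM building block can be written down in a few lines: the ROC operating points follow from the mixture structure described above. The sketch below traces the (proper) CBM ROC curve and checks the trapezoidal AUC against the closed-form value; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, zetas):
    """CBM operating points: nondiseased ratings ~ N(0,1); diseased ratings
    ~ alpha*N(mu,1) + (1-alpha)*N(0,1), giving a proper ROC curve."""
    fpf = norm.sf(zetas)                               # P(rating > zeta | nondiseased)
    tpf = alpha * norm.sf(zetas - mu) + (1.0 - alpha) * norm.sf(zetas)
    return fpf, tpf

mu, alpha = 2.5, 0.7
zetas = np.linspace(-5.0, 8.0, 400)
fpf, tpf = cbm_roc(mu, alpha, zetas)

# trapezoidal AUC along the curve, checked against the closed form
order = np.argsort(fpf)
auc_num = np.sum(np.diff(fpf[order]) * (tpf[order][:-1] + tpf[order][1:]) / 2.0)
auc_cf = (1.0 - alpha) / 2.0 + alpha * norm.cdf(mu / np.sqrt(2.0))
print(f"AUC numeric = {auc_num:.4f}, closed form = {auc_cf:.4f}")
```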
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
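As a self-contained illustration of first-order variational equations (a generic NumPy sketch, not the REBOUND implementation), the tangent vector of the Hénon-Heiles system is integrated alongside the trajectory and renormalized periodically to estimate the MLE. The initial condition is assumed to lie in the chaotic regime near the escape energy.

```python
import numpy as np

def deriv(s):
    """Hénon-Heiles flow plus its first-order variational (tangent) equations.
    State s = (x, y, px, py, dx, dy, dpx, dpy)."""
    x, y, px, py, dx, dy, dpx, dpy = s
    return np.array([
        px, py,
        -x - 2 * x * y,
        -y - x**2 + y**2,
        dpx, dpy,
        (-1 - 2 * y) * dx + (-2 * x) * dy,   # force Jacobian applied to the
        (-2 * x) * dx + (-1 + 2 * y) * dy,   # tangent vector
    ])

def rk4(s, h):
    k1 = deriv(s); k2 = deriv(s + h / 2 * k1)
    k3 = deriv(s + h / 2 * k2); k4 = deriv(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# initial condition near the escape energy E = 1/6 (assumed chaotic regime)
s = np.array([0.0, 0.2, 0.5465, 0.0, 1.0, 0.0, 0.0, 0.0])
h, n_renorm, steps_per = 0.05, 2000, 20
log_sum = 0.0
for _ in range(n_renorm):
    for _ in range(steps_per):
        s = rk4(s, h)
    g = np.linalg.norm(s[4:])            # tangent-vector growth since last renorm
    log_sum += np.log(g)
    s[4:] /= g                           # renormalize to avoid overflow
mle = log_sum / (n_renorm * steps_per * h)
print(f"estimated MLE ~ {mle:.3f} (positive => chaotic)")
```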
Inhibition of Prolyl Hydroxylase Attenuates Fas Ligand-Induced Apoptosis and Lung Injury in Mice.
Nagamine, Yusuke; Tojo, Kentaro; Yazawa, Takuya; Takaki, Shunsuke; Baba, Yasuko; Goto, Takahisa; Kurahashi, Kiyoyasu
2016-12-01
Alveolar epithelial injury and increased alveolar permeability are hallmarks of acute respiratory distress syndrome. Apoptosis of lung epithelial cells via the Fas/Fas ligand (FasL) pathway plays a critical role in alveolar epithelial injury. Activation of hypoxia-inducible factor (HIF)-1 by inhibition of prolyl hydroxylase domain proteins (PHDs) is a possible therapeutic approach to attenuate apoptosis and organ injury. Here, we investigated whether treatment with dimethyloxalylglycine (DMOG), an inhibitor of PHDs, could attenuate Fas/FasL-dependent apoptosis in lung epithelial cells and lung injury. DMOG increased HIF-1α protein expression in vitro in MLE-12 cells, a murine alveolar epithelial cell line. Treatment of MLE-12 cells with DMOG significantly suppressed cell surface expression of Fas and attenuated FasL-induced caspase-3 activation and apoptotic cell death. Inhibition of the HIF-1 pathway by echinomycin or small interfering RNA transfection abolished these antiapoptotic effects of DMOG. Moreover, intraperitoneal injection of DMOG in mice increased HIF-1α expression and decreased Fas expression in lung tissues. DMOG treatment significantly attenuated caspase-3 activation, apoptotic cell death in lung tissue, and the increase in alveolar permeability in mice instilled intratracheally with FasL. In addition, inflammatory responses and histopathological changes were also significantly attenuated by DMOG treatment. In conclusion, inhibition of PHDs protects lung epithelial cells from Fas/FasL-dependent apoptosis through HIF-1 activation and attenuates lung injury in mice.
Dynamics of ultrathin V-oxide layers on Rh(111) in catalytic oxidation of ammonia and CO.
von Boehn, B; Preiss, A; Imbihl, R
2016-07-20
Catalytic oxidation of ammonia and CO has been studied in the 10^-4 mbar range using a catalyst prepared by depositing ultra-thin vanadium oxide layers on Rh(111) (θV ≈ 0.2 MLE). Using photoemission electron microscopy (PEEM) as a spatially resolving method, we observe that upon heating in an atmosphere of NH3 and O2 the spatial homogeneity of the VOx layer is removed at 800 K and a pattern consisting of macroscopic stripes develops; at elevated temperatures this pattern transforms into a pattern of circular VOx islands. Under reaction conditions the neighboring VOx islands become attracted by each other and coalesce. Similar processes of pattern formation and island coalescence are observed in catalytic CO oxidation. Reoxidation of the reduced VOx catalyst proceeds via surface diffusion of oxygen adsorbed onto Rh(111). A pattern consisting of macroscopic circular VOx islands can also be obtained by heating a Rh(111)/VOx catalyst in pure O2.
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed tremendous improvements in model fitting compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the complete region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newly introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing the Akaike Bayesian Information Criterion (ABIC) as an assessment method, we compare the MLE results with zone divisions considered to those obtained by an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
NASA Astrophysics Data System (ADS)
Wang, Danshi; Zhang, Min; Cai, Zhongle; Cui, Yue; Li, Ze; Han, Huanhuan; Fu, Meixia; Luo, Bin
2016-06-01
An effective machine learning algorithm, the support vector machine (SVM), is presented in the context of a coherent optical transmission system. As a classifier, the SVM can create nonlinear decision boundaries to mitigate the distortions caused by nonlinear phase noise (NLPN). Without any prior information or heuristic assumptions, the SVM can learn and capture the link properties from only a few training data. Compared with the maximum likelihood estimation (MLE) algorithm, a lower bit-error rate (BER) is achieved by the SVM for a given launch power; moreover, the launch power dynamic range (LPDR) is increased by 3.3 dBm for 8 phase-shift keying (8 PSK), 1.2 dBm for QPSK, and 0.3 dBm for BPSK. The maximum transmission distance corresponding to a BER of 1×10^-3 is increased by 480 km for the case of 8 PSK. The larger launch power range and longer transmission distance improve the tolerance to amplitude and phase noise, which demonstrates the feasibility of the SVM in digital signal processing for M-PSK formats. Meanwhile, in order to apply the SVM method to 16 quadrature amplitude modulation (16 QAM) detection, we propose a parameter optimization scheme. By utilizing cross-validation and grid-search techniques, the optimal parameters of the SVM can be selected, leading to an LPDR improvement of 2.8 dBm. Additionally, we demonstrate that the SVM is also effective in combating laser phase noise combined with in-phase and quadrature (I/Q) modulator imperfections, but the improvement is insignificant for linear noise and separate I/Q imbalance. The computational complexity of the SVM is also discussed. The relatively low complexity makes it possible for the SVM to be implemented in real-time processing.
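A toy version of the classifier idea is easy to set up with scikit-learn: train an RBF-kernel SVM on received I/Q samples whose constellation has been rotated by a power-dependent phase, a crude stand-in for NLPN. The channel model and parameters below are illustrative assumptions only.

```python
import numpy as np
from sklearn import svm

rng = np.random.default_rng(6)
M, n = 8, 4000                            # 8-PSK, number of symbols
sym = rng.integers(0, M, n)
tx = np.exp(1j * 2 * np.pi * sym / M)

# toy channel: AWGN plus a power-dependent phase rotation mimicking NLPN
rx = tx + (rng.normal(0, 0.08, n) + 1j * rng.normal(0, 0.08, n))
rx *= np.exp(1j * 0.35 * np.abs(rx) ** 2)

X = np.column_stack([rx.real, rx.imag])   # I/Q samples as 2-D features
clf = svm.SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X[:2000], sym[:2000])             # a modest training set suffices
ser = np.mean(clf.predict(X[2000:]) != sym[2000:])
print(f"symbol error rate with SVM detection: {ser:.4f}")
```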
Coastal retracking using along-track echograms and its dependency on coastal topography
NASA Astrophysics Data System (ADS)
Ichikawa, K.; Wang, X.
2017-12-01
Although the Brown mathematical model is the standard model for waveform retracking over open oceans, coastal waveforms usually deviate from open ocean waveform shapes due to inhomogeneous surface reflections within altimeter footprints, and thus cannot be directly interpreted by the Brown model. Generally, the two primary sources of heterogeneous surface reflections are land surfaces and bright targets such as calm surface water. The former reduces echo power, while the latter often produces particularly strong echoes. In previous studies, sub-waveform retrackers, which use waveform samples collected around leading edges in order to avoid trailing-edge noise, have been recommended for coastal waveform retracking. In the present study, the peaky-type noise caused by fixed-point bright targets is explicitly detected and masked using the parabolic signature in the sequential along-track waveforms (or, azimuth-range echograms). Moreover, the power deficit of waveform trailing edges caused by weak land reflections is compensated for by estimating the ratio of sea surface area within each annular footprint in order to produce pseudo-homogeneous reflected waveforms suitable for the Brown model. Using this method, Jason-2 altimeter waveforms are retracked in several coastal areas. Our results show that both the correlation coefficient and root mean square difference between the derived sea surface height anomalies and tide gauge records retain values similar to the open ocean level (0.9 and 20 cm), even in areas approaching 3 km from coastlines, which is considerably improved from the 10 km limit of the conventional MLE4 retracker and the 7 km limit of the sub-waveform ALES retracker. These values, however, depend on the coastal topography of the study areas, because the approach distance limit increases (decreases) in areas with complicated (straight) coastlines.
2013-10-01
4A, TGFbeta decreased E-cadherin expression and increased Col1a1 expression in MLE12 cells. Soluble Cad11 Fc fusion protein inhibited EMT induced by... TGFbeta, as noted by higher E-cadherin levels and a significant reduction in Col1a1 mRNA. In contrast, when Cad11 Fc fusion protein was immobilized... Fc fusion protein alone was able to induce Col1a1 expression at the 50 ug/ml concentration, although E-cadherin expression was also increased. In
NASA Technical Reports Server (NTRS)
Vaughan, O. H., Jr.
1990-01-01
Information on the data obtained from the Mesoscale Lightning Experiment flown on STS-26 is provided. The experiment used onboard TV cameras and a 35 mm film camera to obtain data. Data from the 35 mm camera are presented. During the mission, the crew had difficulty locating the various targets of opportunity with the TV cameras. To obtain as much data as possible in the short observational timeline allowed due to other commitments, the crew opted to use the hand-held 35 mm camera.
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (assuming, of course, no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
2014-01-01
We describe a model for assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and the effort required by the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability, with implications for planning instruction and for how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.
Leg exoskeleton reduces the metabolic cost of human hopping.
Grabowski, Alena M; Herr, Hugh M
2009-09-01
During bouncing gaits such as hopping and running, leg muscles generate force to enable elastic energy storage and return primarily from tendons and, thus, demand metabolic energy. In an effort to reduce metabolic demand, we designed two elastic leg exoskeletons that act in parallel with the wearer's legs; one exoskeleton consisted of a multiple leaf (MLE) and the other of a single leaf (SLE) set of fiberglass springs. We hypothesized that hoppers, hopping on both legs, would adjust their leg stiffness while wearing an exoskeleton so that the combination of the hopper and exoskeleton would behave as a linear spring-mass system with the same total stiffness as during normal hopping. We also hypothesized that decreased leg force generation while wearing an exoskeleton would reduce the metabolic power required for hopping. Nine subjects hopped in place at 2.0, 2.2, 2.4, and 2.6 Hz with and without an exoskeleton while we measured ground reaction forces, exoskeletal compression, and metabolic rates. While wearing an exoskeleton, hoppers adjusted their leg stiffness to maintain linear spring-mass mechanics and a total stiffness similar to normal hopping. Without accounting for the added weight of each exoskeleton, wearing the MLE reduced net metabolic power by an average of 6% and wearing the SLE reduced net metabolic power by an average of 24% compared with hopping normally at frequencies between 2.0 and 2.6 Hz. Thus, when hoppers used external parallel springs, they likely decreased the mechanical work performed by the legs and substantially reduced metabolic demand compared with hopping without wearing an exoskeleton.
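In spring-mass terms, springs in parallel add, so the legs need to supply only the difference between the total stiffness that the hop frequency demands and what the exoskeleton provides. The sketch below works through this relationship with illustrative numbers; the mass, frequency, and exoskeleton stiffnesses are assumptions, not the study's measurements, and the resonance formula is a simplification of hopping mechanics.

```python
import numpy as np

def leg_stiffness_in_parallel(k_total, k_exo):
    """Springs in parallel add: k_total = k_leg + k_exo, so a hopper keeping
    the same total stiffness can off-load k_exo from the legs."""
    return k_total - k_exo

m = 70.0                                   # body mass, kg (assumed)
f_hop = 2.2                                # hopping frequency, Hz (assumed)
k_total = m * (2 * np.pi * f_hop) ** 2     # simple spring-mass stiffness target
for k_exo in (0.0, 4000.0, 8000.0):        # exoskeleton stiffness, N/m (assumed)
    k_leg = leg_stiffness_in_parallel(k_total, k_exo)
    print(f"k_exo={k_exo:7.0f} N/m -> required leg stiffness {k_leg:7.0f} N/m")
```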
Using embryology screencasts: a useful addition to the student learning experience?
Evans, Darrell J R
2011-01-01
Although podcasting has been a well-used resource format in the last few years as a way of improving the student learning experience, the inclusion of enhanced audiovisual formats such as screencasts has been less common, despite the advantage that they work well for both visual and auditory learners. This study examines the use of and student reaction to a set of screencasts introduced to accompany embryology lectures within a second-year module at Brighton and Sussex Medical School. Five mini-lecture screencasts and one review quiz screencast were produced as digital recordings of computer screen output with audio narration and released to students via the managed learning environment (MLE). Analysis of server log information from the MLE showed that the screencasts were accessed by many of the students in the cohort, although the exact numbers were variable depending on the screencast. Students accessed screencasts at different times of the day and over the whole of the access period, although maximum downloads were predictably recorded leading up to the written examination. Quantitative and qualitative feedback demonstrated that most students viewed the screencasts favorably in terms of usefulness to their learning, and end-of-module written examination scores suggest that the screencasts may have had a positive effect on student outcome when compared with previous student attainment. Overall, the development of a series of embryology screencasts to accompany embryology lecture sessions appears to be a useful addition to learning for most students and not simply an innovation that checks the box of "technology engagement." Copyright © 2011 American Association of Anatomists.
Jafari, Saeid; Goh, Yong M; Rajion, Mohamed A; Jahromi, Mohammad F; Ahmad, Yusof H; Ebrahimi, Mahdi
2017-02-01
Papaya leaf methanolic extract (PLE) at concentrations of 0 (CON), 5 (LLE), 10 (MLE) and 15 (HLE) mg/250 mg dry matter (DM) with 30 mL buffered rumen fluid were incubated for 24 h to identify its effect on in vitro ruminal methanogenesis and ruminal biohydrogenation (BH). Total gas production was not affected (P > 0.05) by addition of PLE compared to the CON at 24 h of incubation. Methane (CH4) production (mL/250 mg DM) decreased (P < 0.05) with increasing levels of PLE. The acetate to propionate ratio was lower (P < 0.05) in MLE (2.02) and HLE (1.93) compared to the CON (2.28). Supplementation of the diet with PLE significantly (P < 0.05) decreased the rate of BH of C18:1n-9 (oleic acid; OA), C18:2n-6 (linoleic acid; LA), C18:3n-3 (linolenic acid; LNA) and C18 polyunsaturated fatty acids (PUFA) compared to CON after 24 h incubation, which resulted in higher concentrations of BH intermediates such as C18:1 t11 (vaccenic acid; VA), c9t11 conjugated LA (CLA) (rumenic acid; RA) and t10c12 CLA. Real-time PCR analysis indicated that the total bacteria, total protozoa, Butyrivibrio fibrisolvens and methanogen populations in HLE decreased (P < 0.05) compared to CON, but the total bacteria and B. fibrisolvens populations were higher (P < 0.05) in CON compared to the PLE treatment groups. © 2016 Japanese Society of Animal Science.
Multivariate stochastic analysis for monthly hydrological time series at Cuyahoga River Basin
NASA Astrophysics Data System (ADS)
zhang, L.
2011-12-01
The copula has become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g., the Gumbel-Hougaard copula, Cook-Johnson copula, and Frank copula, and the meta-elliptical copulas, e.g., the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g., multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals whose values are assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered i.i.d. random variables due to the periodicity in the data structure. Also, the stationarity assumption is under question due to climate change and land use and land cover (LULC) change in the past years. To this end, it is necessary to reevaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Also, as to the study of the dependence structure of multivariate hydrological time series, the assumption of the same type of univariate distribution needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be studied through copula theory. As to parameter estimation, maximum likelihood estimation (MLE) will be applied. To illustrate the method, the univariate time series model and the dependence structure will be determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
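One way to make the MLE step concrete is the pseudo-likelihood route: transform each margin to ranks and maximize the copula log-likelihood. The sketch below does this for a bivariate Gaussian copula, whose density has a simple closed form; the paired "discharge" series are simulated with lognormal margins, and the copula family is chosen purely as an example.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def gauss_copula_nll(rho, u, v):
    """Negative log-likelihood of a bivariate Gaussian copula with correlation
    rho, evaluated at pseudo-observations u, v in (0, 1)."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    r2 = rho * rho
    ll = (-0.5 * np.log(1.0 - r2)
          - (r2 * (x**2 + y**2) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2)))
    return -np.sum(ll)

rng = np.random.default_rng(7)
# toy dependent "monthly discharge" pair, e.g. two neighboring gauges
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=600)
q1, q2 = np.exp(z[:, 0]), np.exp(z[:, 1])          # lognormal margins

# pseudo-observations via empirical ranks (margins need not be specified)
u = stats.rankdata(q1) / (q1.size + 1)
v = stats.rankdata(q2) / (q2.size + 1)

res = minimize_scalar(gauss_copula_nll, bounds=(-0.99, 0.99), args=(u, v),
                      method="bounded")
print(f"copula correlation MLE: rho = {res.x:.3f} (true 0.6)")
```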
NASA Astrophysics Data System (ADS)
Klos, A.; Bogusz, J.; Moreaux, G.
2017-12-01
This research focuses on the investigation of the deterministic and stochastic parts of the DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) weekly coordinate time series from the IDS contribution to the ITRF2014. A set of 90 stations was divided into three groups depending on when the data were collected at an individual station. To reliably describe the DORIS time series, we employed a mathematical model that included the long-term nonlinear signal, linear trend and seasonal oscillations (these three sum up to produce the Polynomial Trend Model) plus a stochastic part, all being resolved with Maximum Likelihood Estimation (MLE). We proved that the values of the parameters delivered for DORIS data are strictly correlated with the time span of the observations, meaning that the most recent data are the most reliable ones. Not only did the seasonal amplitudes decrease over the years, but also, and most importantly, the noise level and its type changed significantly. We examined five different noise models for the stochastic part of the DORIS time series: pure white noise (WN), pure power-law noise (PL), a combination of white and power-law noise (WNPL), an autoregressive process of first order (AR(1)) and a Generalized Gauss-Markov model (GGM). Our study indicates that the PL process may be chosen as the preferred one for most of the DORIS data. Moreover, the preferred noise model has changed through the years from AR(1) to pure PL, with few stations characterized by a positive spectral index.
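For readers unfamiliar with how such noise models are compared, the following is a minimal sketch (an assumption about the general approach, not the authors' software) of evaluating the Gaussian log-likelihood of residuals under a power-law covariance built from Hosking's fractional-differencing coefficients; in practice one would maximize this over the spectral index and noise amplitudes for each candidate model and compare the fits.

import numpy as np

def power_law_cov(n, kappa, sigma):
    # Fractional-differencing (Hosking) coefficients for spectral index kappa
    # (0 = white noise, -1 = flicker noise); d = -kappa / 2.
    d = -kappa / 2.0
    h = np.zeros(n)
    h[0] = 1.0
    for i in range(1, n):
        h[i] = h[i - 1] * (i - 1 + d) / i
    # J maps unit white noise to power-law noise: J[i, j] = h[i - j], j <= i.
    J = np.array([np.r_[h[:i + 1][::-1], np.zeros(n - i - 1)] for i in range(n)])
    return sigma**2 * (J @ J.T)

def log_likelihood(r, C):
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (len(r) * np.log(2.0 * np.pi) + logdet
                   + r @ np.linalg.solve(C, r))

r = np.random.default_rng(1).standard_normal(200)   # stand-in residuals
for kappa in (0.0, -0.5, -1.0):
    print(kappa, log_likelihood(r, power_law_cov(len(r), kappa, 1.0)))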
Weakly Supervised Dictionary Learning
NASA Astrophysics Data System (ADS)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, and can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
Kim, Seok-Jo; Cheresh, Paul; Jablonski, Renea P.; Morales-Nebreda, Luisa; Cheng, Yuan; Hogan, Erin; Yeldandi, Anjana; Chi, Monica; Piseaux, Raul; Ridge, Karen; Hart, C. Michael; Chandel, Navdeep; Budinger, G.R. Scott; Kamp, David W.
2018-01-01
Rationale Alveolar epithelial cell (AEC) injury and mitochondrial dysfunction are important in the development of lung fibrosis. Our group has shown that in the asbestos-exposed lung, the generation of mitochondrial reactive oxygen species (ROS) in AEC mediates mitochondrial DNA (mtDNA) damage and apoptosis, which are necessary for lung fibrosis. These data suggest that mitochondrial-targeted antioxidants should ameliorate asbestos-induced lung fibrosis. Objective To determine whether transgenic mice that express mitochondrial-targeted catalase (MCAT) have reduced lung fibrosis following exposure to asbestos or bleomycin and, if so, whether this occurs in association with reduced AEC mtDNA damage and apoptosis. Methods Crocidolite asbestos (100 μg/50 μL), TiO2 (negative control), bleomycin (0.025 units/50 μL), or PBS was instilled intratracheally in 8-10-week-old wild-type (WT; C57BL/6J) or MCAT mice. The lungs were harvested at 21 d. Lung fibrosis was quantified by collagen levels (Sircol) and lung fibrosis scores. AEC apoptosis was assessed by cleaved caspase-3 (CC-3)/Surfactant protein C (SFTPC) immunohistochemistry (IHC) and semi-quantitative analysis. AEC (primary AT2 cells from WT and MCAT mice and MLE-12 cells) mtDNA damage was assessed by a quantitative PCR-based assay, apoptosis was assessed by DNA fragmentation, and ROS production was assessed by a MitoSOX assay. Results Compared to WT, crocidolite-exposed MCAT mice exhibit reduced pulmonary fibrosis as measured by lung collagen levels and lung fibrosis score. The protective effects in MCAT mice were accompanied by reduced AEC mtDNA damage and apoptosis. Similar findings were noted following bleomycin exposure. Euk-134, a mitochondrial SOD/catalase mimetic, attenuated MLE-12 cell DNA damage and apoptosis. Finally, compared to WT, asbestos-induced MCAT AT2 cell ROS production was reduced. Conclusions Our finding that MCAT mice have reduced pulmonary fibrosis, AEC mtDNA damage and apoptosis following exposure to asbestos or bleomycin suggests an important role for AEC mitochondrial H2O2-induced mtDNA damage in promoting lung fibrosis. We reason that strategies aimed at limiting AEC mtDNA damage arising from excess mitochondrial H2O2 production may be a novel therapeutic target for mitigating pulmonary fibrosis. PMID:27840320
Transitional basal cells at the squamous-columnar junction generate Barrett’s oesophagus
Jiang, Ming; Li, Haiyan; Zhang, Yongchun; Yang, Ying; Lu, Rong; Liu, Kuancan; Lin, Sijie; Lan, Xiaopeng; Wang, Haikun; Wu, Han; Zhu, Jian; Zhou, Zhongren; Xu, Jianming; Lee, Dong-Kee; Zhang, Lanjing; Lee, Yuan-Cho; Yuan, Jingsong; Abrams, Julian A.; Wang, Timothy G.; Sepulveda, Antonia R.; Wu, Qi; Chen, Huaiyong; Sun, Xin; She, Junjun; Chen, Xiaoxin; Que, Jianwen
2017-01-01
In several organ systems the transitional zone between different types of epithelia is a hotspot for pre-neoplastic metaplasia and malignancy [1–3]. However, the cell-of-origin for the metaplastic epithelium and subsequent malignancy remains obscure [1–3]. In the case of Barrett’s oesophagus (BE), intestinal metaplasia occurs at the gastro-oesophageal junction, where stratified squamous epithelium transitions into simple columnar cells [4]. Based on different experimental models, several alternative cell types have been proposed as the source of the metaplasia, but in all cases the evidence is inconclusive and no model completely mimics BE with the presence of intestinal goblet cells [5–8]. Here, we describe a novel transitional columnar epithelium with distinct basal progenitor cells (p63+ KRT5+ KRT7+) at the squamous-columnar junction (SCJ) in the upper gastrointestinal tract of the mouse. We use multiple models and lineage tracing strategies to show that this unique SCJ basal cell population serves as a source of progenitors for the transitional epithelium. Moreover, upon ectopic expression of CDX2 these transitional basal progenitors differentiate into intestinal-like epithelium including goblet cells, thus reproducing Barrett’s metaplasia. A similar transitional columnar epithelium is present at the transitional zones of other mouse tissues, including the anorectal junction, and, importantly, at the gastro-oesophageal junction in the human gut. Acid reflux-induced oesophagitis and the multilayered epithelium (MLE) believed to be a precursor of BE are both characterized by the expansion of the transitional basal progenitor cells. Taken together, our findings reveal the presence of a previously unidentified transitional zone in the epithelium of the upper gastrointestinal tract and provide evidence that the p63+ KRT7+ basal cells in this zone are the cell-of-origin for MLE and BE. PMID:29019984
Ali, Mehboob; Heyob, Kathryn; Rogers, Lynette K
2016-06-15
Deaths associated with cancer metastasis have steadily increased, making the need for newer, anti-metastatic therapeutics imperative. Gelsolin and vimentin, actin binding proteins expressed in metastatic tumors, participate in actin remodelling and regulate cell migration. Docosahexaenoic acid (DHA) limits cancer cell proliferation and adhesion, but the mechanisms involved in reducing metastatic phenotypes are unknown. We aimed to investigate the effects of DHA on gelsolin and vimentin expression, and ultimately cell migration and proliferation, in this context. Non-invasive lung epithelial cells (MLE12) and invasive lung cancer cells (A549) were treated with DHA (30 μmol/ml) and/or 8-bromo-cyclic adenosine monophosphate (8-Br-cAMP) (300 μmol/ml) for 6 or 24 h either before (pre-treatment) or after (post-treatment) plating in transwells. Migration was assessed by the number of cells that progressed through the transwell. Gelsolin and vimentin expression were measured by Western blot and confocal microscopy in cells, and by immunohistochemistry in human lung cancer biopsy samples. A significant decrease in cell migration was detected for A549 cells treated with DHA versus control, but this same decrease was not seen in MLE12 cells. DHA and 8-Br-cAMP altered gelsolin and vimentin expression, but no clear pattern of change was observed. Immunofluorescence staining indicated slightly higher vimentin expression in human lung tissue that was malignant compared to control. Collectively, our data indicate that DHA inhibits cancer cell migration and further suggest that vimentin and gelsolin may play secondary roles in cancer cell migration and proliferation, but are not the primary regulators. Copyright © 2016 Elsevier Inc. All rights reserved.
Zavaleta-Muñiz, S A; Gonzalez-Lopez, L; Murillo-Vazquez, J D; Saldaña-Cruz, A M; Vazquez-Villegas, M L; Martín-Márquez, B T; Vasquez-Jimenez, J C; Sandoval-Garcia, F; Ruiz-Padilla, A J; Fajardo-Robledo, N S; Ponce-Guarneros, J M; Rocha-Muñoz, A D; Alcaraz-Lopez, M F; Cardona-Müller, D; Totsuka-Sutto, S E; Rubio-Arellano, E D; Gamez-Nava, J I
2016-12-19
Several interleukin 6 gene (IL6) polymorphisms are implicated in susceptibility to rheumatoid arthritis (RA). It has not yet been established with certainty if these polymorphisms are associated with the severe radiographic damage observed in some RA patients, particularly those with the development of joint bone ankylosis (JBA). The objective of the present study was to evaluate the association between severe radiographic damage in hands and the -174G/C and -572G/C IL6 polymorphisms in Mexican Mestizo people with RA. Mestizo adults with RA and long disease duration (>5 years) were classified into two groups according to the radiographic damage in their hands: a) severe radiographic damage (JBA and/or joint bone subluxations) and b) mild or moderate radiographic damage. We compared the differences in genotype and allele frequencies of -174G/C and -572G/C IL6 polymorphisms (genotyped using polymerase chain reaction-restriction fragment length polymorphism) between these two groups. Our findings indicated that the -174G/C polymorphism of IL6 is associated with severe joint radiographic damage [maximum likelihood odds ratios (MLE_OR): 8.03; 95%CI 1.22-187.06; P = 0.03], whereas the -572G/C polymorphism of IL6 exhibited no such association (MLE_OR: 1.5; 95%CI 0.52-4.5; P = 0.44). Higher anti-cyclic citrullinated peptide antibody levels were associated with more severe joint radiographic damage (P = 0.04). We conclude that there is a relevant association between the -174G/C IL6 polymorphism and severe radiographic damage. Future studies in other populations are required to confirm our findings.
Biweekly Maps of Wind Stress for the North Pacific from the ERS-1 Scatterometer
NASA Technical Reports Server (NTRS)
1997-01-01
The European Remote-sensing Satellite (ERS-1) was launched in July 1991 and contained several instruments for observing the Earth's ocean, including a wind scatterometer. The scatterometer measurements were processed by the European Space Agency (ESA) and the Jet Propulsion Laboratory (JPL). JPL reprocessed (Freilich and Dunbar, 1992) the ERS-1 backscatter measurements to produce a 'value added' data set that contained the ESA wind vector as well as a set of up to four ambiguities. These ambiguities were further processed using a maximum-likelihood estimation (MLE) and a median filter to produce a 'selected vector.' This report describes a technique developed to produce time-averaged wind field estimates with their expected errors using only scatterometer wind vectors. The processing described in this report involved extracting regions of interest from the data tapes, checking the quality and creating the wind field estimate. This analysis also includes the derivation of biweekly average wind vectors over the North Pacific Ocean at a resolution of 0.5° x 0.5°. This was done with an optimal averaging algorithm in time and an over-determined biharmonic spline in space. There have been other attempts at creating gridded wind files from ERS-1 winds, e.g., kriging techniques (Bentamy et al., 1996) and successive corrections schemes (Tang and Liu, 1996). There are several inherent problems with the ERS-1 scatterometer. Since this is a multidisciplinary mission, the satellite is flown in different orbits optimized for each phase of the mission. The scatterometer also shares several sub-systems with the Synthetic Aperture Radar (SAR) and cannot be operated while the SAR is in operation. The scatterometer is also a single-sided instrument and only measures backscatter along the right side of the satellite. The processing described here generates biweekly wind maps during the two-year analysis period regardless of the satellite orbit or missing data.
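The median-filter step can be sketched as follows; this is a schematic reconstruction of the idea (not JPL's actual processing code), in which each cell repeatedly adopts whichever of its up-to-four ambiguous vectors lies closest to the component-wise median of the currently selected vectors in a surrounding window.

import numpy as np

def median_filter_select(amb, selected, iters=10, win=3):
    # amb: (ny, nx, 4, 2) ambiguous wind vectors; selected: (ny, nx) indices.
    ny, nx = selected.shape
    ii = np.arange(ny)[:, None]
    jj = np.arange(nx)[None, :]
    for _ in range(iters):
        chosen = amb[ii, jj, selected]              # (ny, nx, 2), Jacobi-style sweep
        for i in range(ny):
            for j in range(nx):
                nbhd = chosen[max(0, i - win):i + win + 1,
                              max(0, j - win):j + win + 1].reshape(-1, 2)
                med = np.median(nbhd, axis=0)       # component-wise median vector
                dist = np.linalg.norm(amb[i, j] - med, axis=1)
                selected[i, j] = int(np.argmin(dist))
    return selected

# Toy field: the first ambiguity is near the true wind, three are far off.
rng = np.random.default_rng(5)
truth = rng.normal([6.0, 2.0], 0.5, (10, 10, 2))
amb = np.repeat(truth[:, :, None, :], 4, axis=2)
amb[:, :, 1:, :] += rng.normal(0.0, 4.0, (10, 10, 3, 2))
print(median_filter_select(amb, rng.integers(0, 4, (10, 10))))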
Space Shuttle to deploy Magellan planetary science mission
NASA Technical Reports Server (NTRS)
1989-01-01
The objectives of Space Shuttle Mission STS-30 are described along with major flight activities, prelaunch and launch operations, trajectory sequence of events, and landing and post-landing operations. The primary objective of STS-30 is to successfully deploy the Magellan spacecraft into low earth orbit. Following deployment, Magellan will be propelled to its Venus trajectory by an Inertial Upper Stage booster. The objectives of the Magellan mission are to obtain radar images of more than 70 percent of Venus' surface, a near-global topographic map, and near-global gravity field data. Secondary STS-30 payloads include the Fluids Experiment Apparatus (FEA) and the Mesoscale Lightning Experiment (MLE).
Yun, Seok-Min; Lee, Ye-Ji; Choi, WooYoung; Kim, Heung-Chul; Chong, Sung-Tae; Chang, Kyu-Sik; Coburn, Jordan M; Klein, Terry A; Lee, Won-Ja
2016-07-01
Ticks play an important role in the transmission of arboviruses responsible for emerging infectious diseases, and have a significant impact on human, veterinary, and wildlife health. In the Republic of Korea (ROK), little is known about the presence of tick-borne viruses and their vectors. A total of 21,158 ticks belonging to 3 genera and 6 species collected at 6 provinces and 4 metropolitan areas in the ROK from March to October 2014 were assayed for selected tick-borne pathogens. Haemaphysalis longicornis (n=17,570) was the most commonly collected, followed by Haemaphysalis flava (n=3317), Ixodes nipponensis (n=249), Amblyomma testudinarium (n=11), Haemaphysalis phasiana (n=8), and Ixodes turdus (n=3). Ticks were pooled (adults 1-5, nymphs 1-30, and larvae 1-50) and tested by one-step reverse transcription polymerase chain reaction (RT-PCR) or nested RT-PCR for the detection of severe fever with thrombocytopenia syndrome virus (SFTSV), tick-borne encephalitis virus (TBEV), Powassan virus (POWV), Omsk hemorrhagic fever virus (OHFV), and Langat virus (LGTV). The overall maximum likelihood estimation (MLE) [estimated number of viral RNA positive ticks/1000 ticks] for SFTSV and TBEV was 0.95 and 0.43, respectively, while all pools were negative for POWV, OHFV, and LGTV. The purpose of this study was to determine the prevalence of SFTSV, TBEV, POWV, OHFV, and LGTV in ixodid ticks collected from vegetation in the ROK to aid our understanding of the epidemiology of tick-borne viral diseases. Results from this study emphasize the need for continuous tick-based arbovirus surveillance to monitor the emergence of tick-borne diseases in the ROK. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
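The pooled-sample MLE quoted here has a simple form: a pool of n ticks tests positive with probability 1 - (1 - p)^n, and the per-tick infection probability p is chosen to maximize the likelihood over all pools. The sketch below illustrates this with hypothetical pool counts, not the study's raw data.

import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(p, sizes, positive):
    q = (1.0 - p) ** sizes                 # probability a whole pool is negative
    return -np.sum(np.where(positive, np.log1p(-q), np.log(q)))

# Hypothetical data: 300 pools of 30 nymphs each, 9 pools test positive.
sizes = np.full(300, 30)
positive = np.zeros(300, dtype=bool)
positive[:9] = True

fit = minimize_scalar(neg_loglik, args=(sizes, positive),
                      bounds=(1e-6, 0.05), method='bounded')
print(f"MLE: {1000 * fit.x:.2f} infected ticks per 1000 ticks")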
Kinetics of Huperzine A Dissociation from Acetylcholinesterase via Multiple Unbinding Pathways.
Rydzewski, J; Jakubowski, R; Nowak, W; Grubmüller, H
2018-06-12
The dissociation of huperzine A (hupA) from Torpedo californica acetylcholinesterase (TcAChE) was investigated by 4 μs unbiased and biased all-atom molecular dynamics (MD) simulations in explicit solvent. We performed our study using memetic sampling (MS) for the determination of reaction pathways (RPs), metadynamics to calculate free energy, and maximum-likelihood estimation (MLE) to recover kinetic rates from unbiased MD simulations. Our simulations suggest that the dissociation of hupA occurs mainly via two RPs: a front door along the axis of the active-site gorge (pwf) and a new transient side door (pws) formed by the Ω-loop (residues 67-94 of TcAChE). An analysis of the inhibitor unbinding along the RPs suggests that pws is opened transiently after hupA and the Ω-loop reach a low free-energy transition state characterized by the orientation of the pyridone group of the inhibitor directed toward the Ω-loop plane. Unlike pws, pwf does not require large structural changes in TcAChE to be accessible. The estimated free energies and rates agree well with available experimental data. The dissociation rates along the unbinding pathways are similar, suggesting that the dissociation of hupA along pws is likely to be relevant. This indicates that perturbations to hupA-TcAChE interactions could potentially induce pathway hopping. In summary, our results characterize the slow-onset inhibition of TcAChE by hupA, which may provide the structural and energetic bases for the rational design of next-generation slow-onset inhibitors with optimized pharmacokinetic properties for the treatment of Alzheimer's disease.
NASA Astrophysics Data System (ADS)
Escobar Wolf, R. P.; Diehl, J. F.; Rose, W. I.; Singer, B. S.
2005-12-01
Paleomagnetic directions determined from oriented block samples collected by Rose et al. in 1977 (Journal of Geology) and from eight paleomagnetic sites drilled in lava flows from Santa María volcano, Guatemala, in 1990 define a pattern of variation similar to the pattern of geomagnetic field changes recorded by the sediments of the Wilson Creek Formation near Mono Lake, California. This led Conway et al. in 1994 (Journal of Geology) to suggest that these flows had recorded the Mono Lake Excursion (MLE). The correlation was made on pattern recognition alone and relied almost entirely on the well-defined inclination dataset rather than on the declination data; no radioisotopic ages were available. In March of 2005 we returned to the crater of Santa María and drilled 23 lava flows from the original sections of Rose et al.; block samples for 40Ar/39Ar dating were also collected. Unfortunately, aggradation in the crater due to mass wasting made it impossible to sample all the flows of Rose et al. At each site or lava flow, four to seven cores were drilled and oriented with a sun compass. Samples cut from the drilled cores were magnetically cleaned using alternating field demagnetization and analyzed using principal component analysis. Thermal demagnetization is currently underway. The resulting inclination waveform (over 70° of change, from +60° to -12°) is very similar to those previously reported in the literature for the MLE, but the declination waveform shows little variation (<25°; mean declination is 13.4°) throughout the stratigraphic sequence that we collected. Consequently, VGP data from the lava flows do not show the classic clockwise and counterclockwise loops seen at the Wilson Creek section and at other MLE locations. Instead the directions (VGPs) tend to cluster in three distinct groups, with the lowermost lava flows (5) and uppermost lava flows (3) clustering near the expected axial dipole inclination for the region (~28°) while lava flows from the middle of the stratigraphic section have inclinations near zero (+8° to -12°). The transition between the low-inclination middle section and the upper section is marked by flows with inclinations up to +60°. This is also seen in the Conway data set. Preliminary 40Ar/39Ar dates from lava flows having near-zero inclinations suggest an age of 20 ka. Therefore the possibility exists that the Santa María lava flows have recorded the Hilina Pali Excursion (HPE). In fact the magnitude of the inclination change recorded in the Santa María lava flows is very similar to that recorded by the lava flows from the Hawaiian Scientific Drilling Project. This suggests that the HPE is at least a regional geomagnetic event and may be useful as a tool for stratigraphic correlation. However, paleointensity data are needed before any firm conclusions can be drawn.
Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M
2000-02-25
The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. The alignment and measurement processes were synchronized to allow for data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single base extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high-throughput DNA sequencing in parallel capillaries.
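As a hedged illustration of lifetime identification by MLE from TCSPC data, the sketch below assumes a single-exponential decay truncated to the 20 ns laser period implied by the 50 MHz repetition rate; it is a toy model, not the instrument's analysis software.

import numpy as np
from scipy.optimize import minimize_scalar

T = 20.0                                   # ns, repetition period at 50 MHz

def neg_loglik(tau, t):
    # Density on [0, T]: exp(-t / tau) / (tau * (1 - exp(-T / tau))).
    return (len(t) * (np.log(tau) + np.log1p(-np.exp(-T / tau)))
            + np.sum(t) / tau)

rng = np.random.default_rng(2)
arrivals = rng.exponential(2.1, 5000)      # 2.1 ns lifetime, a stand-in value
arrivals = arrivals[arrivals < T]          # photons inside the counting window

fit = minimize_scalar(neg_loglik, args=(arrivals,),
                      bounds=(0.1, 10.0), method='bounded')
print(f"estimated fluorescence lifetime: {fit.x:.2f} ns")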
Ustundag-Budak, Yasemin; Huysal, Kagan
2017-02-01
Electrolytes have a narrow range of biological variation, and small changes are clinically significant. It is important to select the best method for clinical decision making and patient monitoring in the emergency room. The sigma metrics model provides an objective method to evaluate the performance of a method. The aims were to calculate sigma metrics for electrolytes measured with one arterial blood gas analyser and two auto-analysers that use different technologies, and to identify the best approach for electrolyte monitoring in an emergency setting in the context of routine emergency room workflow. The Coefficient of Variation (CV) was determined from Internal Quality Control (IQC) data measured from July 2015 to January 2016 for all three analysers. The records of KBUD external quality data (Association of Clinical Biochemists, Istanbul, Turkey) for both the Mindray BS-2000M analyser (Mindray, Shenzhen, China) and the Architect C16000 (Abbott Diagnostics, Abbott Park, IL), and of the MLE clinical laboratory evaluation program (Washington, DC, USA) for the Radiometer ABL 700 (Radiometer Trading, Copenhagen, Denmark), during the study period were used to determine the bias. The average sigma values calculated with the Radiometer ABL700 were -1.1 for sodium, 3.3 for potassium, and 0.06 for chloride; all of its calculated sigma values were better than those of the auto-analysers. The sigma values obtained from all analysers suggest that running more controls and increasing the calibration frequency for electrolytes is necessary for quality assurance.
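For context, the sigma metric referenced throughout is computed as sigma = (TEa - |bias|) / CV, with the allowable total error (TEa), bias and CV all expressed in percent; the numbers in the sketch below are illustrative assumptions, not the study's data.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    # All three inputs in percent; higher sigma means better performance.
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical potassium figures: TEa 5.8%, bias 1.2%, CV 1.4% -> ~3.3 sigma.
print(sigma_metric(tea_pct=5.8, bias_pct=1.2, cv_pct=1.4))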
Development of Thread-compatible Open Source Stack
NASA Astrophysics Data System (ADS)
Zimmermann, Lukas; Mars, Nidhal; Schappacher, Manuel; Sikora, Axel
2017-07-01
The Thread protocol is a recent development based on 6LoWPAN (IPv6 over IEEE 802.15.4), but with extensions towards a more media-independent approach, which additionally promises true interoperability. To evaluate and analyse the operation of a Thread network, a given open source 6LoWPAN stack for embedded devices (emb::6) has been extended in order to comply with the Thread specification. The implementation covers Mesh Link Establishment (MLE) and network layer functionality as well as the 6LoWPAN mesh-under routing mechanism based on MAC short addresses. The development has been verified on a virtualization platform and allows dynamic establishment of network topologies based on Thread’s partitioning algorithm.
System identification and the modeling of sailing yachts
NASA Astrophysics Data System (ADS)
Legursky, Katrina
This research represents an exploration of sailing yacht dynamics with full-scale sailing motion data, physics-based models, and system identification techniques. The goal is to provide a method of obtaining and validating suitable physics-based dynamics models for use in control system design on autonomous sailing platforms, which have the capacity to serve as mobile, long-range, high-endurance autonomous ocean sensing platforms. The primary contributions of this study to the state of the art are the formulation of a five degree-of-freedom (DOF) linear multi-input multi-output (MIMO) state space model of sailing yacht dynamics, the process for identification of this model from full-scale data, a description of the maneuvers performed during on-water tests, and an analysis method to validate estimated models. The techniques and results described herein can be directly applied to and tested on existing autonomous sailing platforms. A full-scale experiment on a 23-ft monohull sailing yacht was developed to collect motion data for physics-based model identification. Measurements include three axes of accelerations, velocities, angular rates, and attitude angles in addition to apparent wind speed and direction. The sailing yacht herein is treated as a dynamic system with two control inputs, the rudder angle, δR, and the mainsail angle, δB, which are also measured. Over 20 hours of full-scale sailing motion data were collected, representing three sail configurations corresponding to a range of wind speeds: the Full Main and Genoa (abbrev. Genoa) for lower wind speeds, the Full Main and Jib (abbrev. Jib) for mid-range wind speeds, and the Reefed Main and Jib (abbrev. Reef) for the highest wind speeds. The data also cover true wind angles from upwind through a beam reach. A physics-based non-linear model to describe sailing yacht motion is outlined, including descriptions of methods to model the aerodynamics and hydrodynamics of a sailing yacht in surge, sway, roll, and yaw. Existing aerodynamic models for sailing yachts are unsuitable for control system design as they do not include a physical description of the sails' dynamic effect on the system. A new aerodynamic model is developed and validated using the full-scale sailing data which includes sail deflection as a control input to the system. The Maximum Likelihood Estimation (MLE) algorithm is used with non-linear simulation data to successfully estimate a set of hydrodynamic derivatives for a sailing yacht. It is shown that all sailing yacht models will contain a second-order mode (referred to herein as Mode 1A.S or 4B.S) which is dependent upon the trimmed roll angle. For the test yacht it is concluded that, depending on the trimmed roll angle, either roll rate and roll angle or surge velocity and yaw rate are the dominant motion variables for this mode. The mode is dynamically stable over part of the trimmed roll angle range, transitioning from stability at the higher trimmed roll angles to instability in a bounded region. These conclusions align with other work which has also found roll angle to be a driving factor in the dynamic behavior of a tall ship (Johnson, Miles, Lasher, & Womack, 2009). It is also shown that all linear models contain a first-order mode (referred to herein as Mode 3A.F or 1B.F) which lies very close to the origin of the complex plane, indicating a long time constant. Measured models have indicated this mode can be stable or unstable.
The eigenvector analysis reveals that the mode is stable if the surge contribution is < 40% and the sway contribution is > 20%. The small set of maneuvers necessary for model identification, the quick OSLS estimation method, and the detailed modal analysis of estimated models outlined in this work are immediately applicable to existing autonomous monohull sailing yachts, and could readily be adapted for use with other wind-powered vessel configurations such as wing-sails, catamarans, and trimarans. (Abstract shortened by UMI.)
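The modal analysis described above can be illustrated with a short sketch: given an identified state matrix A (the numbers below are hypothetical, not the dissertation's estimates), the eigenvalues give the modes and their stability, and the normalized eigenvector magnitudes give each state's contribution to a mode.

import numpy as np

states = ["surge u", "sway v", "roll rate p", "yaw rate r", "roll angle phi"]
A = np.array([[-0.20,  0.05,  0.00,  0.10,  0.02],   # hypothetical identified
              [ 0.02, -0.60,  0.05, -0.40,  0.10],   # 5-state matrix, not the
              [ 0.01,  0.30, -0.90,  0.05, -2.50],   # dissertation's estimates
              [ 0.00,  0.15,  0.02, -0.50,  0.05],
              [ 0.00,  0.00,  1.00,  0.00,  0.00]])  # d(phi)/dt = p

eigvals, eigvecs = np.linalg.eig(A)
for lam, vec in zip(eigvals, eigvecs.T):
    contrib = np.abs(vec) / np.abs(vec).sum()        # each state's share of the mode
    label = "stable" if lam.real < 0 else "unstable"
    print(f"mode {lam:.3f} ({label}):",
          {s: round(100 * c, 1) for s, c in zip(states, contrib)})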
The Role of Gender Empowerment on Reproductive Health Outcomes in Urban Nigeria
Speizer, Ilene S.; Fotso, Jean-Christophe; Akiode, Akinsewa; Saad, Abdulmumin; Calhoun, Lisa; Irani, Laili
2014-01-01
Objectives To date, limited evidence is available for urban populations in sub-Saharan Africa, specifically research into the association between urban women’s empowerment and reproductive health outcomes. The objective of this study is to investigate whether women’s empowerment in urban Nigerian settings is associated with family planning use and maternal health behaviors. Moreover, we examine whether different effects of empowerment exist by region of residence. Methods This study uses baseline household survey data from the Measurement, Learning & Evaluation Project (MLE) for the Nigerian Urban Reproductive Health Initiative (NURHI) being implemented in six major cities. We examine four dimensions of empowerment: economic freedom, attitudes towards domestic violence, partner prohibitions and decision-making. We determine if the empowerment dimensions have different effects on reproductive health outcomes by region of residence using multivariate analyses. Results Results indicate that more empowered women are more likely to use modern contraception, deliver in a health facility and have a skilled attendant at birth. These trends vary by empowerment dimension and by city/region in Nigeria. Conclusions We conclude by discussing the implications of these findings on future programs seeking to improve reproductive health outcomes in urban Nigeria and beyond. PMID:23576403
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-03-01
To reach higher order thinking skills, students need to master conceptual understanding and strategic competence, as these are two basic components of higher order thinking skills (HOTS). RMT is a unique realization of the cognitive conceptual construction approach based on Feuerstein, with his theory of Mediated Learning Experience (MLE), and on Vygotsky’s sociocultural theory. This was a quasi-experimental study which compared an experimental class taught with Rigorous Mathematical Thinking (RMT) as the learning method and a control class taught with Direct Learning (DL) as the conventional learning activity. The study examined whether the two learning models had different effects on the conceptual understanding and strategic competence of junior high school students. The data were analyzed using Multivariate Analysis of Variance (MANOVA), which showed a significant difference between the experimental and control classes when mathematics conceptual understanding and strategic competence were considered jointly (Wilks’ Λ = 0.84). Further, independent t-tests showed a significant difference between the two classes on both mathematical conceptual understanding and strategic competence. These results indicate that Rigorous Mathematical Thinking (RMT) had a positive impact on mathematics conceptual understanding and strategic competence.
NASA Astrophysics Data System (ADS)
Zaheer-ul-Haq; Khan, Waqasuddin
2011-01-01
Class II major histocompatibility complex (MHC II) molecules, as expressed by antigen-presenting cells, are heterodimeric cell-surface glycoprotein receptors that are fundamental in initiating and propagating an immune response by presenting tumor-associated antigenic peptides to CD4+/TH cells. The loading efficiency of such peptides can be improved by small organic compounds (MHC Loading Enhancers, MLEs) that convert the non-receptive peptide conformation of MHC II to a peptide-receptive conformation. In a reversible reaction, these compounds open up the binding site of MHC II molecules by specific interactions with a yet undefined pocket. Here, we performed molecular docking and molecular dynamics simulation studies of adamantyl compounds on the predicted cavity around the P1 pocket of two allelic variants of HLA-DRs. The purpose was to investigate the suitability of adamantyl compounds as MLEs at the dimorphic β86 position. Docking studies revealed that, besides numerous molecular interactions formed by the adamantyl compounds, Asnβ82, Tyrβ83, and Thrβ90 are the crucial amino acid residues that are characterized as the "sensors" of peptide loading. Molecular dynamics simulation studies exposed the dynamic structural changes that HLA-DRs adopted in response to binding of 3-(1-adamantyl)-5-hydrazidocarbonyl-1H-pyrazole (AdCaPy). The conformations of AdCaPy complexed with the Glyβ86 HLA-DR allelic variant are well correlated with the stabilized form of peptide-loaded HLA-DRs, further confirming the role of AdCaPy as an MLE. Hydrogen bonding interaction analysis clearly demonstrated that, after making suitable contacts with AdCaPy, HLA-DR changes its local conformation. However, HLA-DR having Valβ86 at the dimorphic position did not accommodate AdCaPy as an MLE due to steric hindrance caused by the valine.
Ambo, Akihiro; Ohkatsu, Hiromichi; Minamizawa, Motoko; Watanabe, Hideko; Sugawara, Shigeki; Nitta, Kazuo; Tsuda, Yuko; Okada, Yoshio; Sasaki, Yusuke
2012-03-15
To develop novel inhibitors of P-glycoprotein (P-gp), dimeric peptides related to an opioid peptide containing the Dmt-Tic pharmacophore were synthesized and their P-gp inhibitory activities were analyzed. Of the 30 analogs synthesized, N(α),N(ε)-[(CH(3))(2)Mle-Tic](2)Lys-NH(2) and its D-Lys analog were found to exhibit potent P-gp inhibitory activity, twice that of verapamil, in doxorubicin-resistant K562 cells. Structure-activity studies indicated that the correct hydrophobicity and spacer length between two aromatic rings are important structural elements in this series of analogs for inhibition of P-gp. Copyright © 2012 Elsevier Ltd. All rights reserved.
Determination of seasonals using wavelets in terms of noise parameters changeability
NASA Astrophysics Data System (ADS)
Klos, Anna; Bogusz, Janusz; Figurski, Mariusz
2015-04-01
The reliable velocities of GNSS-derived observations are becoming of high importance nowadays. How we determine and subtract the seasonal signals may cause autocorrelation in the time series and affect the uncertainties of linear parameters. The periodic changes in GNSS time series are commonly assumed to be the sum of annual and semi-annual changes with amplitudes and phases constant in time, and Least-Squares Estimation (LSE) is generally used to model these sine waves. However, not only the time-variability of the seasonals, but also their higher harmonics should be considered. In this research, we focused on more than 230 globally distributed IGS stations that were processed at the Military University of Technology EPN Local Analysis Centre (MUT LAC) in Bernese 5.0 software. The network was divided into 7 different sub-networks with a few overlapping stations and processed separately with the newest models. Here, we propose a wavelet-based determination and removal of the trend and of the seasonals across the whole frequency spectrum between the Chandler and quarter-annual periods from the North, East and Up components, and compare it with LSE-determined values. We used a Meyer symmetric, orthogonal wavelet and assumed nine levels of decomposition. The details from 6 up to 9 were analyzed here as periodic components with frequencies between 0.3-2.5 cpy. The characteristic oscillations for each frequency band were pointed out. The details lower than 6, summed together with the detrended approximation, were considered as residua. The power spectral densities (PSDs) of original and decomposed data were stacked for the North, East and Up components for each of the sub-networks so as to show what power was removed at each decomposition level. Moreover, the noise that each frequency band follows (in terms of spectral indices of power-law dependencies) was estimated using a spectral method and compared for all processed sub-networks. It seems that the lowest frequencies, up to 0.7 cpy, are characterized by lower spectral indices in comparison to higher ones, which are close to white noise. Given that decomposition levels overlap each other, the frequency-window choice becomes a main point in spectral index estimation. Our results were compared with those obtained by Maximum Likelihood Estimation (MLE), and possible differences, as well as their impact on velocity uncertainties, were pointed out. The results show that the spectral indices estimated in the time and frequency domains differ by 0.15 at most. Moreover, we compared the power removed by the wavelet decomposition levels with that subtracted with LSE, assuming the same periodicities. In comparison to LSE, the wavelet-based approach leaves residua closer to white noise with lower power-law amplitudes, which strictly reduces velocity uncertainties. The last approximation was analyzed here as the long-term, non-linear trend and compared with the LSE-determined linear one. These two trends differ at the level of 0.3 mm/yr in the most extreme case, which makes wavelet decomposition useful for velocity determination.
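A minimal sketch of the decomposition step follows, assuming PyWavelets' discrete Meyer wavelet ('dmey') and a synthetic daily series long enough for nine clean levels; the grouping of details 6-9 as the seasonal band follows the text above, but the series and amplitudes are stand-ins.

import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 2**15                                     # synthetic daily series, long enough
t = np.arange(n)                              # for nine clean decomposition levels
up = (3.0 * np.sin(2 * np.pi * t / 365.25)    # annual term, mm
      + 1.0 * np.sin(4 * np.pi * t / 365.25)  # semi-annual term
      + rng.standard_normal(n))               # noise stand-in

coeffs = pywt.wavedec(up, 'dmey', level=9)    # [a9, d9, d8, ..., d1]
seasonal = [c if 1 <= i <= 4 else np.zeros_like(c)   # keep details 6..9 only
            for i, c in enumerate(coeffs)]
band = pywt.waverec(seasonal, 'dmey')[:n]     # roughly the 0.3-2.5 cpy band
residua = up - band                           # series with the seasonal band removed
print(band[:3], residua.std())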
Johnson, Dayna A.; Lisabeth, Lynda; Lewis, Tené T.; Sims, Mario; Hickson, DeMarc A.; Samdarshi, Tandaw; Taylor, Herman; Diez Roux, Ana V.
2016-01-01
Study Objectives: Studies have shown that psychosocial stressors are related to poor sleep. However, studies of African Americans, who may be more vulnerable to the impact of psychosocial stressors, are lacking. Using the Jackson Heart Study (JHS) baseline data, we examined associations of psychosocial stressors with sleep in 4,863 African Americans. Methods: We examined cross-sectional associations between psychosocial stressors and sleep duration and quality in a large population sample of African Americans. Three measures of psychosocial stress were investigated: the Global Perceived Stress Scale (GPSS); Major Life Events (MLE); and the Weekly Stress Inventory (WSI). Sleep was assessed using self-reported hours of sleep and sleep quality rating (1 = poor; 5 = excellent). Multinomial logistic and linear regression models were used to examine the association of each stress measure (in quartiles) with continuous and categorical sleep duration (< 5 h (“very short”), 5–6 h (“short”) and > 9 h (“long”) versus 7 or 8 h (“normal”)); and with sleep quality, after adjustment for demographics and risk factors (body mass index, hypertension, diabetes, physical activity). Results: Mean age of the sample was 54.6 years and 64% were female. Mean sleep duration was 6.4 ± 1.5 hours, 54% had a short sleep duration, 5% had a long sleep duration, and 34% reported a “poor” or “fair” sleep quality. Persons in the highest GPSS quartile had higher odds of very short sleep (odds ratio: 2.87, 95% confidence interval [CI]: 2.02, 4.08), higher odds of short sleep (1.72, 95% CI: 1.40, 2.12), shorter average sleep duration (Δ = −33.6 min, 95% CI: −41.8, −25.4), and reported poorer sleep quality (Δ = −0.73, 95% CI: −0.83, −0.63) compared to those in the lowest quartile of GPSS after adjustment for covariates. Similar patterns were observed for WSI and MLE. Psychosocial stressors were not associated with long sleep. For WSI, effects of stress on sleep duration were stronger for younger (< 60 y) and college-educated African Americans. Conclusions: Psychosocial stressors are associated with higher odds of short sleep, lower average sleep duration, and lower sleep quality in African Americans. Psychosocial stressors may be a point of intervention among African Americans for the improvement of sleep and downstream health outcomes. Citation: Johnson DA, Lisabeth L, Lewis TT, Sims M, Hickson DA, Samdarshi T, Taylor H, Diez Roux AV. The contribution of psychosocial stressors to sleep among African Americans in the Jackson Heart Study. SLEEP 2016;39(7):1411–1419. PMID:27166234
Automated microwave ablation therapy planning with single and multiple entry points
NASA Astrophysics Data System (ADS)
Liu, Sheena X.; Dalal, Sandeep; Kruecker, Jochen
2012-02-01
Microwave ablation (MWA) has become a recommended treatment modality for interventional cancer treatment. Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs of RFA procedures. Thus a planning system addressing MWA-specific parameters and workflows is highly desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an automated MWA planning system that provides precise probe locations for complete coverage of tumor and margin. We model the thermal ablation lesion as an ellipsoidal object with three known radii varying with the duration of the ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative optimization problem. The ablation centers are steered toward the location which minimizes both un-ablated tumor tissue and the collateral damage caused to the healthy tissue. We assess the performance of our algorithm using simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the computed ablation center in 3D and the ground truth ablation center achieves 1.75 mm (standard deviation of the mean (STD): 0.69 mm). The Mean Radial Error (MRE), estimated by comparing the computed ablation radii with the ground truth radii, reaches 0.64 mm (STD: 0.43 mm). These preliminary results demonstrate the accuracy and robustness of the described planning algorithm.
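A much-simplified sketch of the center-steering idea follows (an assumption about the approach, not the authors' implementation): the ellipsoid center is pulled toward the centroid of still-uncovered tumor voxels; the full objective would also penalize ablated healthy tissue.

import numpy as np

grid = np.stack(np.meshgrid(*[np.linspace(-20, 20, 81)] * 3, indexing="ij"), -1)
tumor = np.linalg.norm(grid, axis=-1) <= 12.0   # spherical tumor + margin, mm
radii = np.array([15.0, 12.0, 10.0])            # ellipsoid radii for one power/time setting

def covered(center):
    # Voxels inside the ablation ellipsoid centered at `center`.
    return np.sum(((grid - center) / radii) ** 2, axis=-1) <= 1.0

center = np.array([5.0, -4.0, 3.0])             # initial probe placement
for _ in range(50):
    uncovered = tumor & ~covered(center)
    if not uncovered.any():
        break
    # Steer the ablation center toward the centroid of uncovered tumor voxels.
    center += 0.5 * (grid[uncovered].mean(axis=0) - center)
print(center, int(uncovered.sum()))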
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, a space bisection task, participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
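The MLE prediction used in such studies has a compact form: each cue is weighted by its inverse variance, and the fused variance falls below either unimodal variance. The sketch below uses illustrative numbers, not the study's data.

def mle_fusion(x_a, var_a, x_t, var_t):
    # Inverse-variance weighting: the more reliable cue dominates the estimate.
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_t)
    x_fused = w_a * x_a + (1.0 - w_a) * x_t
    var_fused = 1.0 / (1.0 / var_a + 1.0 / var_t)
    return x_fused, var_fused

# Audio estimate at +2.0 cm (variance 4.0), tactile at 0.0 cm (variance 1.0):
print(mle_fusion(2.0, 4.0, 0.0, 1.0))   # -> (0.4, 0.8): fused variance < both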
Shuttle Atlantis to deploy Galileo probe toward Jupiter
NASA Technical Reports Server (NTRS)
1989-01-01
The objectives of Space Shuttle Mission STS-34 are described along with major flight activities, prelaunch and launch operations, trajectory sequence of events, and landing and post-landing operations. The primary objective of STS-34 is to deploy the Galileo planetary exploration spacecraft into low earth orbit. Following deployment, Galileo will be propelled on a trajectory, known as Venus-Earth-Earth Gravity Assist (VEEGA), by an inertial upper stage (IUS). The objectives of the Galileo mission are to study the chemical composition, state, and dynamics of the Jovian atmosphere and satellites, and investigate the structure and physical dynamics of the Jovian magnetosphere. Secondary STS-34 payloads include the Shuttle Solar Backscatter Ultraviolet (SSBUV) instrument; the Mesoscale Lightning Experiment (MLE); and various other payloads involving polymer morphology, the effects of microgravity on plant growth hormone, and the growth of ice crystals.
Arctic sea ice concentration observed with SMOS during summer
NASA Astrophysics Data System (ADS)
Gabarro, Carolina; Martinez, Justino; Turiel, Antonio
2017-04-01
The Arctic Ocean is under profound transformation. Observations and model predictions show a dramatic decline in sea ice extent and volume [1]. A retreating Arctic ice cover has a marked impact on regional and global climate, and vice versa, through a large number of feedback mechanisms and interactions with the climate system [2]. The launch of the Soil Moisture and Ocean Salinity (SMOS) mission, in 2009, marked the dawn of a new type of space-based microwave observations. Although the mission was originally conceived for hydrological and oceanographic studies [3,4], SMOS is also making inroads in the cryospheric sciences by measuring thin ice thickness [5,6]. SMOS carries an L-band (1.4 GHz) passive interferometric radiometer (the so-called MIRAS) that measures the electromagnetic radiation emitted by the Earth's surface, at about 50 km spatial resolution, with continuous multi-angle viewing, a wide swath (1200 km), and a 3-day revisit time at the equator, but more frequent at the poles. A novel radiometric method to determine sea ice concentration (SIC) from SMOS is presented. The method uses the Bayesian-based Maximum Likelihood Estimation (MLE) approach to retrieve SIC. The advantage of this approach with respect to the classical linear inversion is that the former takes into account the uncertainty of the tie-point measured data in addition to the mean value, while the latter only uses a mean value of the tie-point data. When thin ice is present, the SMOS algorithm underestimates the SIC due to the low opacity of the ice at this frequency. However, using a synergistic approach with data from other satellite sensors, it is possible to obtain accurate thin ice thickness estimations with the Bayesian-based method. Despite its lower spatial resolution relative to SSM/I or AMSR-E, SMOS-derived SIC products are little affected by the atmosphere and the snow (almost transparent at L-band). Moreover, L-band measurements are more robust against the accelerated metamorphosis and melt processes that affect ice surface fraction measurements during summer. Therefore, the SMOS SIC dataset has great potential during summer periods, in which higher-frequency radiometers present high uncertainties in determining the SIC. This new dataset can contribute to complement ongoing monitoring efforts in the Arctic cryosphere. [1] Comiso, J. C.: Large Decadal Decline of the Arctic Multiyear Ice Cover, Journal of Climate, 25, 1176-1193, 2012. [2] Holland, M. M. and Bitz, C. M.: Polar amplification of climate change in coupled models, Climate Dynamics, 21, 221-232, 2003. [3] Font, J., et al.: SMOS: The Challenging Sea Surface Salinity Measurement from Space, Proc. IEEE, 98(5), 649-665, 2010. [4] Kerr, Y., et al.: The SMOS mission: New tool for monitoring key elements of the global water cycle, Proc. IEEE, 98(5), 666-687, 2010. [5] Kaleschke, L., et al.: Sea ice thickness retrieval from SMOS brightness temperatures during the Arctic freeze-up period, Geophys. Res. Lett., doi:10.1029/2012GL050916, 2012. [6] Huntemann, M., et al.: Empirical sea ice thickness retrieval during the freeze-up period from SMOS high incident angle observations, The Cryosphere Discuss., 7, 4379-4405, 2013.
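The retrieval idea can be sketched as follows, with illustrative (assumed) L-band tie points rather than the mission's calibrated values: the likelihood of an observed brightness temperature is evaluated over candidate concentrations, with the variance growing out of the tie-point uncertainties rather than being ignored.

import numpy as np

TB_ICE, SD_ICE = 235.0, 6.0        # K, ice tie point (assumed values)
TB_WAT, SD_WAT = 100.0, 3.0        # K, open-water tie point (assumed values)
SD_OBS = 2.0                       # K, radiometric noise

def sic_mle(tb_obs):
    c = np.linspace(0.0, 1.0, 1001)
    mean = c * TB_ICE + (1.0 - c) * TB_WAT
    var = (c * SD_ICE) ** 2 + ((1.0 - c) * SD_WAT) ** 2 + SD_OBS ** 2
    nll = 0.5 * (np.log(var) + (tb_obs - mean) ** 2 / var)
    return c[np.argmin(nll)]       # maximum-likelihood concentration

print(sic_mle(170.0))              # about 0.5 for a mixed pixel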
Investigation on the coloured noise in GPS-derived position with time-varying seasonal signals
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Klos, Anna; Bos, Machiel Simon; Bogusz, Janusz
2016-04-01
The seasonal signals in GPS-derived time series arise from real geophysical signals related to tidal (residual) or non-tidal (loadings from atmosphere, ocean and continental hydrosphere, thermoelastic strain, etc.) effects and from numerical artefacts, including aliasing from mismodelling at short periods or the repeatability of the GPS satellite constellation with respect to the Sun (draconitics). Singular Spectrum Analysis (SSA) is a method for the investigation of nonlinear dynamics, suitable for either stationary or non-stationary data series without prior knowledge of their character. The aim of SSA is to mathematically decompose the original time series into a sum of a slowly varying trend, seasonal oscillations and noise. In this presentation we will explore the ability of SSA to subtract the time-varying seasonal signals in GPS-derived North-East-Up topocentric components and show the properties of the coloured noise in the residua. For this purpose we used data from globally distributed IGS (International GNSS Service) permanent stations processed by the JPL (Jet Propulsion Laboratory) in a PPP (Precise Point Positioning) mode. After introducing a threshold of 13 years, 264 stations remained, with a maximum length reaching 23 years. The data were initially pre-processed for outliers, offsets and gaps. The SSA was applied to the pre-processed series to estimate the time-varying seasonal signals. We adopted a 3-year window as the optimal dimension of its size, determined with Akaike Information Criterion (AIC) values. A Fisher-Snedecor test corrected for the presence of temporal correlation was used to determine the statistical significance of the reconstructed components. This procedure showed that the first four components, describing annual and semi-annual signals, are significant at a 99.7% confidence level, which corresponds to the 3-sigma criterion. We compared the non-parametric SSA approach with the commonly chosen parametric Least-Squares Estimation, which assumes constant amplitudes and phases over time. We noticed a maximum difference in seasonal oscillation of 3.5 mm and a maximum change in velocity of 0.15 mm/year for the Up component (YELL, Yellowknife, Canada) when SSA and LSE are compared. The annual signal has the greatest influence on data variability in the time series, while the semi-annual signal in the Up component makes a much smaller contribution to the total variance of the data. For some stations more than 35% of the total variance is explained by the annual signal. Based on the Power Spectral Densities (PSD), we showed that SSA has the ability to properly subtract the seasonals changing in time with almost no influence on the power-law character of the stochastic part. Then, the modified Maximum Likelihood Estimation (MLE) in the Hector software was applied to the SSA-filtered time series. We noticed a significant improvement in spectral indices and power-law amplitudes in comparison to those classically determined with LSE, which will be the main subject of this presentation.
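Basic SSA itself is compact enough to sketch (this is a generic implementation, not the authors' code): embed the series with a lag window, take the SVD of the trajectory matrix, and rebuild a chosen group of components by diagonal averaging; in practice the components forming the annual pair are identified by inspecting the singular values.

import numpy as np

def ssa_reconstruct(x, L, components):
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix, L x K
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xg = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in components)
    rec = np.zeros(N)                                     # diagonal (Hankel) averaging
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xg[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

t = np.arange(3650)                                       # ten years of daily data
x = 5.0 + 3.0 * np.sin(2 * np.pi * t / 365.25) \
    + np.random.default_rng(4).standard_normal(t.size)
seasonal = ssa_reconstruct(x, L=3 * 365, components=[1, 2])  # annual pair (check singular values)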
NASA Astrophysics Data System (ADS)
Sadewo, E.; Syabri, I.; Pradono
2018-05-01
In the theory of urban transformation, there has been growing attention to the development of metropolitan outskirts. While the debate on the post-suburbanization process has already settled, knowledge of the development beyond it is still limited. This paper examines urban spatial pattern transformation beyond post-suburbanization in the Jabodetabek Metropolitan Area (JMA). We use medium and large enterprise (MLE) data from the economic census (EC) and population data from the village potential census (PODES) at two cross-sectional reference times: the 2005 and 2006 data are assumed to represent the post-suburban situation, and the 2014 and 2016 data represent the situation beyond it. We analyzed the extent to which the post-suburban spatial pattern in JMA has developed by utilizing the Exploratory Spatial Data Analysis (ESDA) method. The results show that the polycentric urban structure of JMA has strengthened. The low-order service functions that were previously clustered in suburban areas have recentralized in the urban core, leaving the manufacturing sector as the main function in the post-suburbia. This implies that the post-suburban status has reached a steady state for the long term before its next transformation. These transformations are accompanied by a shift in population dynamics, with workers tending to polarize based on proximity to their job locations. The findings call for further study of post-suburban commuting patterns.
Before and after retrofit - response of a building during ambient and strong motions
Celebi, M.; Liu, Huaibao P.; ,
1998-01-01
This paper presents results obtained from ambient vibration and strong-motion responses of a thirteen-story, moment-resisting steel framed Santa Clara County Office Building (SCCOB) before being retrofitted by visco-elastic dampers and from ambient vibration response following the retrofit. Understanding the cumulative structural and site characteristics that affect the response of SCCOB before and after the retrofit is important in assessing earthquake hazards to other similar buildings and decision making in retrofitting them. The results emphasize the need to better evaluate structural and site characteristics in developing earthquake-resisting designs that avoid resonating effects. Various studies of the strong-motion response records from the SCCOB during the 24 April 1984 Morgan Hill (MHE; Ms = 6.1), the 31 March 1986 Mt. Lewis (MLE; Ms = 6.1) and the 17 October 1989 Loma Prieta (LPE; Ms = 7.1) earthquakes show that the dynamic characteristics of the building are such that it (a) resonated, (b) responded with a beating effect due to close coupling of its translational and torsional frequencies, and (c) had a long-duration response due to low damping. During each of these earthquakes, there was considerable contents damage and the occupants felt the rigorous vibration of the building. Ambient tests of SCCOB performed following the LPE showed that both translational and torsional periods of the building are smaller than those derived from strong motions. Ambient tests performed following the retrofit of the building with visco-elastic dampers show that the structural fundamental mode frequency of the building has increased. The increased frequency implies a stiffer structure. Strong-motion response of the building during future earthquakes will ultimately validate the effectiveness of the retrofit method.
Conceptualization of an R&D Based Learning-to-Innovate Model for Science Education
NASA Astrophysics Data System (ADS)
Lai, Oiki Sylvia
The purpose of this research was to conceptualize an R&D based learning-to-innovate (LTI) model. The problem to be addressed was the lack of a theoretical LTI model, which would inform science pedagogy. The absorptive capacity (ACAP) lens was adopted to untangle the R&D LTI phenomenon into four learning processes: problem-solving via knowledge acquisition, incremental improvement via knowledge participation, scientific discovery via knowledge creation, and product design via knowledge productivity. The four knowledge factors were the latent factors, and each factor had seven manifest elements as measured variables. The key objectives of the nonexperimental quantitative survey were to measure the relative importance of the identified elements and to explore the underlying structure of the variables. A questionnaire was prepared and administered to more than 155 R&D professionals from four sectors - business, academic, government, and nonprofit. The results showed that every identified element was important to the R&D professionals in terms of improving the related type of innovation. The most important elements were highlighted to serve as building blocks for elaboration. To search for patterns in the data matrix, exploratory factor analysis (EFA) was performed. Principal component analysis was the first phase of EFA, used to extract factors, while maximum likelihood estimation (MLE) was used to estimate the model. EFA yielded the finding of two aspects in each kind of knowledge. Logical names were assigned to represent the nature of the subsets: problem and knowledge under knowledge acquisition, planning and participation under knowledge participation, exploration and discovery under knowledge creation, and construction and invention under knowledge productivity. These two constructs, within each kind of knowledge, added structure to the vague R&D based LTI model. The research questions and hypothesis tests were addressed using correlation analysis. The alternative hypotheses that there were positive relationships between knowledge factors and their corresponding types of innovation were accepted. In-depth study of each process is recommended in both research and application. Experimental tests are needed in order to ultimately present the LTI model to enhance the scientific knowledge absorptive capacity of learners and facilitate their innovation performance.
Gueguen, Marc; Vuillerme, Nicolas; Isableu, Brice
2012-01-01
Background The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Methodology/Principal Findings Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas no haptic settings alteration was observed whether due to the egocentric FOR alteration or the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rule, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule, observed more particularly in visual-dependent subjects. Conclusions Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks. PMID:22509295
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, a norm reflecting a curve-fitting method is found to affect estimation-error reduction more significantly than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data.
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit close agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8-19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set. (Abstract shortened by UMI.)
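For readers who want to see the EM-PCA iteration concretely, the sketch below implements the standard EM update for PPCA due to Tipping and Bishop, which the EM-PCA builds on. It is a minimal illustration under the usual isotropic-noise model, not the thesis code; the function name and defaults are ours.

```python
import numpy as np

def em_ppca(X, q, n_iter=200, sigma2=1.0, seed=0):
    """EM algorithm for probabilistic PCA (Tipping & Bishop).
    X: (N, d) data matrix; q: latent dimension. Returns the mean, the factor
    loading W (whose columns span the POD/principal subspace at the MLE),
    and the isotropic noise variance sigma^2."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    W = rng.standard_normal((d, q))
    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z_n
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                      # rows are E[z_n]
        sumEzz = N * sigma2 * Minv + Ez.T @ Ez  # sum of E[z_n z_n^T]
        # M-step: update the factor loading and the noise variance
        W = (Xc.T @ Ez) @ np.linalg.inv(sumEzz)
        sigma2 = (np.sum(Xc**2)
                  - 2.0 * np.sum(Ez * (Xc @ W))
                  + np.trace(sumEzz @ W.T @ W)) / (N * d)
    return mu, W, sigma2

# Synthetic check: a 3-D subspace buried in 10-D data with 0.1-sigma noise
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 10)) \
    + 0.1 * rng.standard_normal((500, 10))
mu, W, s2 = em_ppca(X, q=3)
print(s2)   # should approach the true noise variance, 0.01
```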
Kim, Seok-Jo; Cheresh, Paul; Jablonski, Renea P; Morales-Nebreda, Luisa; Cheng, Yuan; Hogan, Erin; Yeldandi, Anjana; Chi, Monica; Piseaux, Raul; Ridge, Karen; Michael Hart, C; Chandel, Navdeep; Scott Budinger, G R; Kamp, David W
2016-12-01
Alveolar epithelial cell (AEC) injury and mitochondrial dysfunction are important in the development of lung fibrosis. Our group has shown that in the asbestos-exposed lung, the generation of mitochondrial reactive oxygen species (ROS) in AEC mediates mitochondrial DNA (mtDNA) damage and apoptosis, which are necessary for lung fibrosis. These data suggest that mitochondrial-targeted antioxidants should ameliorate asbestos-induced lung fibrosis. To determine whether transgenic mice that express mitochondrial-targeted catalase (MCAT) have reduced lung fibrosis following exposure to asbestos or bleomycin and, if so, whether this occurs in association with reduced AEC mtDNA damage and apoptosis. Crocidolite asbestos (100 µg/50 µL), TiO2 (negative control), bleomycin (0.025 units/50 µL), or PBS was instilled intratracheally in 8-10-week-old wild-type (WT; C57BL/6J) or MCAT mice. The lungs were harvested at 21 d. Lung fibrosis was quantified by collagen levels (Sircol) and lung fibrosis scores. AEC apoptosis was assessed by cleaved caspase-3 (CC-3)/surfactant protein C (SFTPC) immunohistochemistry (IHC) and semi-quantitative analysis. AEC (primary AT2 cells from WT and MCAT mice and MLE-12 cells) mtDNA damage was assessed by a quantitative PCR-based assay, apoptosis was assessed by DNA fragmentation, and ROS production was assessed by a MitoSOX assay. Compared to WT, crocidolite-exposed MCAT mice exhibit reduced pulmonary fibrosis as measured by lung collagen levels and lung fibrosis score. The protective effects in MCAT mice were accompanied by reduced AEC mtDNA damage and apoptosis. Similar findings were noted following bleomycin exposure. Euk-134, a mitochondrial SOD/catalase mimetic, attenuated MLE-12 cell DNA damage and apoptosis. Finally, compared to WT, asbestos-induced MCAT AT2 cell ROS production was reduced. Our finding that MCAT mice have reduced pulmonary fibrosis, AEC mtDNA damage and apoptosis following exposure to asbestos or bleomycin suggests an important role for AEC mitochondrial H2O2-induced mtDNA damage in promoting lung fibrosis. We reason that strategies aimed at limiting AEC mtDNA damage arising from excess mitochondrial H2O2 production may be a novel therapeutic target for mitigating pulmonary fibrosis. Published by Elsevier Inc.
Johnson, Dayna A; Lisabeth, Lynda; Lewis, Tené T; Sims, Mario; Hickson, DeMarc A; Samdarshi, Tandaw; Taylor, Herman; Diez Roux, Ana V
2016-07-01
Studies have shown that psychosocial stressors are related to poor sleep. However, studies of African Americans, who may be more vulnerable to the impact of psychosocial stressors, are lacking. Using the Jackson Heart Study (JHS) baseline data, we examined associations of psychosocial stressors with sleep in 4,863 African Americans. We examined cross-sectional associations between psychosocial stressors and sleep duration and quality in a large population sample of African Americans. Three measures of psychosocial stress were investigated: the Global Perceived Stress Scale (GPSS); Major Life Events (MLE); and the Weekly Stress Inventory (WSI). Sleep was assessed using self-reported hours of sleep and sleep quality rating (1 = poor; 5 = excellent). Multinomial logistic and linear regression models were used to examine the association of each stress measure (in quartiles) with continuous and categorical sleep duration (<5 h ["very short"], 5-6 h ["short"], and >9 h ["long"] versus 7 or 8 h ["normal"]) and with sleep quality, after adjustment for demographics and risk factors (body mass index, hypertension, diabetes, physical activity). Mean age of the sample was 54.6 years and 64% were female. Mean sleep duration was 6.4 ± 1.5 hours, 54% had a short sleep duration, 5% had a long sleep duration, and 34% reported a "poor" or "fair" sleep quality. Persons in the highest GPSS quartile had higher odds of very short sleep (odds ratio: 2.87, 95% confidence interval [CI]: 2.02, 4.08), higher odds of short sleep (1.72, 95% CI: 1.40, 2.12), shorter average sleep duration (Δ = -33.6 min; 95% CI: -41.8, -25.4), and reported poorer sleep quality (Δ = -0.73; 95% CI: -0.83, -0.63) compared to those in the lowest quartile of GPSS after adjustment for covariates. Similar patterns were observed for WSI and MLE. Psychosocial stressors were not associated with long sleep. For WSI, effects of stress on sleep duration were stronger for younger (<60 y) and college-educated African Americans. Psychosocial stressors are associated with higher odds of short sleep, lower average sleep duration, and lower sleep quality in African Americans. Psychosocial stressors may be a point of intervention among African Americans for the improvement of sleep and downstream health outcomes. © 2016 Associated Professional Sleep Societies, LLC.
NASA Astrophysics Data System (ADS)
Leclerc, D. F.
2016-12-01
Northern-hemisphere (NH) heatwaves, during which temperatures rise 5 standard deviations (SD), sigma, above the historical mean temperature, mu, are becoming frequent; these events skew temperature anomaly (delta T) profiles towards extreme values. Although generalized extreme value (GEV) distributions have modeled precipitation data, their application to temperatures has met with limited success. This work presents a modified three-parameter (mu, sigma and tau (skew)) Exponential-Gaussian (eGd) model that hindcasts decadal NH land winter (DJF) and summer (JJA) delta Ts from 1951 to 2011, and forecasts profiles for a business-as-usual (BAU) scenario for 2061-2071. We accessed 12 numerical binned (0.05 °C/bin) z-scored NH decadal datasets (posted online until August 2015) from the publicly available website http://www.columbia.edu/~mhs119/PerceptionsAndDice/ mentioned in Hansen et al., PNAS 109 E2415-E2423 (2012) and stated to be in the public domain. No pre-processing was done. Parameters were calculated for the 12 NH datasets pasted into Microsoft Excel™ through the method of moments for 1-tail distributions and through the BEST deconvolution program described by Pommé and Marroyo, Applied Radiation and Isotopes 96 148-153 (2015) for 2-tail distributions. We used maximum likelihood estimation (MLE), residual sum of squares (RSS) and the F-test to find optimal parameter values. Calculated 1st (= sigma + tau) and 2nd (= sigma^2 + tau^2) moments were found to be within 0.5% of observed values. Land delta Ts were recovered from the z-score values by multiplying the winter data by its SD (1.2 °C) and likewise the summer data by 0.6 °C. Results were all within 0.05 °C of 10-year averages from the GHCNv3 NH land dataset. Assuming BAU (increases from 2.1 to 2.6 ppm/y CO2) and using temperature rises of 0.27 °C and 0.35 °C per decade for summer and winter, respectively, and forecasting to 2071, we obtain for the transient climate response for doubled CO2 (560 ppm CO2) mean delta Ts of 2.39 °C for summer and 2.97 °C for NH winter, thereby widely missing the agreed-to 2 °C international target, which will be reached around 2040 at 465 ppm CO2. In summary, barring volcanic eruptions and/or El Niño events, winter delta Ts will exceed 6 °C over 5% of land area, whereas in summer delta Ts will surpass 3.6 °C over 23% of same, both at the 5 sigma level.
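Assuming the standard exponentially modified Gaussian moment identities (mean = mu + tau, variance = sigma^2 + tau^2, skewness = 2 tau^3 / (sigma^2 + tau^2)^1.5), a method-of-moments fit of the kind described can be sketched as below. This is a generic illustration of the moment-matching technique, not the authors' Excel/BEST pipeline.

```python
import numpy as np
from scipy import stats

def exgauss_moments(data):
    """Method-of-moments fit of an exponentially modified Gaussian using
    mean = mu + tau, var = sigma^2 + tau^2, and
    skew = 2*tau^3 / (sigma^2 + tau^2)**1.5 (requires positive skew)."""
    m, s, g = np.mean(data), np.std(data, ddof=1), stats.skew(data)
    tau = s * (g / 2.0) ** (1.0 / 3.0)
    mu = m - tau
    sigma = np.sqrt(max(s**2 - tau**2, 0.0))
    return mu, sigma, tau

# Synthetic warm-skewed anomaly sample (z-score units)
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 50000) + rng.exponential(0.6, 50000)
print(exgauss_moments(sample))   # roughly (0.0, 1.0, 0.6)
```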
Speizer, Ilene S; Corroon, Meghan; Calhoun, Lisa; Lance, Peter; Montana, Livia; Nanda, Priya; Guilkey, David
2014-01-01
Family planning is crucial for preventing unintended pregnancies and for improving maternal and child health and well-being. In urban areas where there are large inequities in family planning use, particularly among the urban poor, programs are needed to increase access to and use of contraception among those most in need. This paper presents the midterm evaluation findings of the Urban Reproductive Health Initiative (Urban RH Initiative) programs, funded by the Bill & Melinda Gates Foundation, that are being implemented in 4 countries: India (Uttar Pradesh), Kenya, Nigeria, and Senegal. Between 2010 and 2013, the Measurement, Learning & Evaluation (MLE) project collected baseline and 2-year longitudinal follow-up data from women in target study cities to examine the role of demand generation activities undertaken as part of the Urban RH Initiative programs. Evaluation results demonstrate that, in each country where it was measured, outreach by community health or family planning workers as well as local radio programs were significantly associated with increased use of modern contraceptive methods. In addition, in India and Nigeria, television programs had a significant effect on modern contraceptive use, and in Kenya and Nigeria, the program slogans and materials that were blanketed across the cities (eg, leaflets/brochures distributed at health clinics and the program logo placed on all forms of materials, from market umbrellas to health facility signs and television programs) were also significantly associated with modern method use. Our results show that targeted, multilevel demand generation activities can make an important contribution to increasing modern contraceptive use in urban areas and could impact Millennium Development Goals for improved maternal and child health and access to reproductive health for all. PMID:25611476
Men's attitudes on gender equality and their contraceptive use in Uttar Pradesh India
Mishra, Anurag; Nanda, Priya; Speizer, Ilene S; Calhoun, Lisa M; Zimmerman, Allison; Bhardwaj, Rochak
2014-01-01
Background Men play a crucial role in contraceptive decision-making, particularly in highly gender-stratified populations. Past research examined men's attitudes toward fertility and contraception and the association with actual contraceptive practices. More research is needed on whether men's attitudes on gender equality are associated with contraceptive behaviors; this is the objective of this study. Methods This study uses baseline data of the Measurement, Learning, and Evaluation (MLE) Project for the Urban Health Initiative in Uttar Pradesh, India. Data were collected from a representative sample of 6,431 currently married men in four cities of the state. Outcomes are current use of contraception and contraceptive method choice. Key independent variables are three gender measures: men's attitudes toward gender equality, gender-sensitive decision making, and restrictions on wife's mobility. Multivariate analyses are used to identify the association between the gender measures and contraceptive use. Results Most men have high or moderate levels of gender-sensitive decision-making, have low to moderate levels of restrictions on wife's mobility, and have moderate to high levels of gender-equitable attitudes in all four cities. Gender-sensitive decision making and equitable attitudes show a significant positive association, and restrictions on wife's mobility show a significant negative relationship, with current contraceptive use. Conclusion The study demonstrates that contraceptive programs need to engage men and address gender-equitable attitudes; this can be done through peer outreach (interpersonal communication) or via mass media. Engaging men to be more gender equal may have an influence beyond contraceptive use in contexts where men play a crucial role in household decision-making. PMID:24894376
Shanmugam, Anusuya; Natarajan, Jeyakumar
2012-06-01
Multi-drug resistance in Mycobacterium leprae (MDR-Mle) creates a pressing need for new anti-leprosy drugs. Since most drugs target a single enzyme, mutation in the active site renders the antibiotic ineffective. However, structural and mechanistic information on essential bacterial enzymes in a pathway could lead to the development of antibiotics that target multiple enzymes. Peptidoglycan is an important component of the cell wall of M. leprae, and its biosynthesis represents an important target for the development of new antibacterial drugs. Biosynthesis of peptidoglycan is a multi-step process that involves four key Mur ligase enzymes: MurC (EC:6.3.2.8), MurD (EC:6.3.2.9), MurE (EC:6.3.2.13) and MurF (EC:6.3.2.10). Hence, in this work we modeled the three-dimensional structures of these Mur ligases using a homology modeling method and analyzed their common binding features. The residues playing an important role in the catalytic activity of each of the Mur enzymes were predicted by docking the Mur ligases with their substrates and ATP. The conserved sequence motifs significant for ATP binding were predicted as the probable residues for structure-based drug design. Overall, the study was successful in listing significant and common binding residues of the Mur enzymes in the peptidoglycan pathway for multi-targeted therapy.
Characterizing the topology of probabilistic biological networks.
Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2013-01-01
Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We then take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in time polynomial in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using MLE. We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
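One way to see why the degree distribution of an uncertain network is computable in polynomial time: if each incident edge exists independently with its own probability, a node's degree follows a Poisson-binomial distribution, which a simple dynamic program evaluates exactly. The sketch below illustrates that idea; it is a generic construction, not the authors' implementation.

```python
from collections import defaultdict

def degree_distribution(edge_probs):
    """Exact degree distribution of a node whose incident edges exist
    independently with the given probabilities (a Poisson-binomial law),
    via dynamic programming in O(k^2) for k incident edges."""
    dist = [1.0]                               # P(degree = 0) before any edge
    for p in edge_probs:
        new = [0.0] * (len(dist) + 1)
        for deg, prob in enumerate(dist):
            new[deg] += prob * (1.0 - p)       # edge absent
            new[deg + 1] += prob * p           # edge present
        dist = new
    return dist

def expected_degree_histogram(incident):
    """incident: dict mapping node -> list of incident-edge probabilities.
    Returns the expected number of nodes at each degree."""
    hist = defaultdict(float)
    for node, probs in incident.items():
        for deg, prob in enumerate(degree_distribution(probs)):
            hist[deg] += prob
    return dict(hist)

# A node with three uncertain interactions
print(degree_distribution([0.9, 0.5, 0.2]))    # P(degree = 0..3)
```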
NASA Astrophysics Data System (ADS)
Vasquez, N.; Corley, A. D.
2015-12-01
In the Mono Basin, CA, fine sand, silt, and volcanic ash deposited in Pleistocene Lake Russell is exposed on the margin of Mono Lake, and on Paoha Island in the lake. The silt records the Mono Lake Excursion (MLE: Denham and Cox, 1971) and several tens of thousands of years of paleomagnetic secular variation (PSV: Denham and Cox, 1971; Liddicoat, 1976; Lund et al., 1988). The sediment is believed to be an accurate recorder of PSV because the MLE has the same signal at widely separated localities in the basin (Denham, 1974; Liddicoat and Coe, 1979; Liddicoat, 1992), with the exception of wave-cut cliffs on the southeast side of the lake (Coe and Liddicoat, 1994). Magnetite, titanomagnetite, and titanomaghemite are present in the sediment (Denham and Cox, 1971; Liddicoat, 1976; Liddicoat and Coe, 1979), which is glacial flour from the adjacent Sierra Nevada (Lajoie, 1968). X-rays of the sediment and lineation measurements show patterns of normal bedding, with layers aligned such that the minimum axes are within 5-10 degrees of normal bedding, with 10 percent foliation and 1 percent lineation (Coe and Liddicoat, 1994). We explore reasons for the difference in part of the PSV record at the wave-cut cliffs beyond the interpretation of Coe and Liddicoat (1994) that paleomagnetic field strength is a controlling factor. Possibilities include the sedimentation rate - at localities on the margin of Mono Lake the rate is about 60 percent less than at the wave-cut cliffs - and the lithology of the sediment. At Mill Creek on the northwest side of Mono Lake, the non-magnetic sediment fraction is coarser-grained than at the wave-cut cliffs by a factor of about two, and there is a similar difference in the total inorganic carbon (TIC) percentage by weight for the two localities (Spokowski et al., 2011). Studies of the sediment at two localities in the basin where the Hilina Pali Excursion (HPE: Teanby et al., 2002) might be recorded (Wilson Creek and South Shore Cliffs; Liddicoat and Coe, 2013) and at an extension of the PSV record of Lund et al. (1988) show a similar pattern to the grain size distribution and TIC percentage described above. Additional measurements of the TIC in the sediments from both sides of Mono Lake for the intervals recording the possible HPE and the PSV extension of Lund et al. (1988) are in progress and will be presented.
SABER: A computational method for identifying active sites for new reactions
Nosrati, Geoffrey R; Houk, K N
2012-01-01
A software suite, SABER (Selection of Active/Binding sites for Enzyme Redesign), has been developed for the analysis of atomic geometries in protein structures, using a geometric hashing algorithm (Barker and Thornton, Bioinformatics 2003;19:1644–1649). SABER is used to explore the Protein Data Bank (PDB) to locate proteins with a specific 3D arrangement of catalytic groups to identify active sites that might be redesigned to catalyze new reactions. As a proof-of-principle test, SABER was used to identify enzymes that have the same catalytic group arrangement present in o-succinyl benzoate synthase (OSBS). Among the highest-scoring scaffolds identified by the SABER search for enzymes with the same catalytic group arrangement as OSBS were L-Ala D/L-Glu epimerase (AEE) and muconate lactonizing enzyme II (MLE), both of which have been redesigned to become effective OSBS catalysts, as demonstrated by experiments. Next, we used SABER to search for naturally existing active sites in the PDB with catalytic groups similar to those present in the designed Kemp elimination enzyme KE07. From over 2000 geometric matches to the KE07 active site, SABER identified 23 matches that corresponded to residues from known active sites. The best of these matches, with a 0.28 Å catalytic atom RMSD to KE07, was then redesigned to be compatible with the Kemp elimination using RosettaDesign. We also used SABER to search for potential Kemp eliminases using a theozyme predicted to provide a greater rate acceleration than the active site of KE07, and used Rosetta to create a design based on the proteins identified. PMID:22492397
Kim, Sae-Hoon; Park, Da-Eun; Lee, Hyun-Seung; Kang, Hye-Ryun; Cho, Sang-Heon
2014-01-01
Background Epidemiologic clinical studies suggest that chronic exposure to chlorine products is associated with the development of asthma and aggravation of asthmatic symptoms. However, the underlying mechanism is not clearly understood. Studies were undertaken to define the effects and mechanisms of chronic low-dose chlorine exposure in the pathogenesis of airway inflammation and airway hyperresponsiveness (AHR). Methods Six-week-old female BALB/c mice were sensitized and challenged with OVA in the presence and absence of chronic low-dose chlorine exposure from naturally vaporized gas of 5% sodium hypochlorite solution. Airway inflammation and AHR were evaluated by bronchoalveolar lavage (BAL) cell recovery and non-invasive plethysmography, respectively. Real-time qPCR, Western blot assay, and ELISA were used to evaluate the mRNA and protein expression of cytokines and other inflammatory mediators. Human epithelial (A549), murine epithelial (MLE12), and macrophage (AMJ2-C11) cells were used to define the responses to low-dose chlorine exposure in vitro. Results Chronic low-dose chlorine exposure significantly augmented airway inflammation and AHR in OVA-sensitized and challenged mice. The expression of the Th2 cytokines IL-4 and IL-5 and the proinflammatory cytokines IL-1β and IL-33 was significantly increased in the OVA/Cl group compared with the OVA group. The chlorine exposure also activates the major molecules associated with the inflammasome pathway in macrophages, with increased expression of the epithelial alarmins IL-33 and TSLP in vitro. Conclusion Chronic low-dose exposure to chlorine aggravates allergic Th2 inflammation and AHR, potentially through activation of inflammasome danger-signaling pathways. PMID:25202911
Molaei, Goudarz; Armstrong, Philip M; Abadam, Charles F; Akaratovic, Karen I; Kiser, Jay P; Andreadis, Theodore G
2015-01-01
Eastern equine encephalitis virus (EEEV) causes a highly pathogenic mosquito-borne zoonosis that is responsible for sporadic outbreaks of severe illness in humans and equines in the eastern USA. Culiseta (Cs.) melanura is the primary vector of EEEV in most geographic regions, but its feeding patterns on specific avian and mammalian hosts are largely unknown in the mid-Atlantic region. The objectives of our study were to: 1) identify avian hosts of Cs. melanura and evaluate their potential role in enzootic amplification of EEEV, 2) assess spatial and temporal patterns of virus activity during a season of intense virus transmission, and 3) investigate the potential role of Cs. melanura in epidemic/epizootic transmission of EEEV to humans and equines. Accordingly, we collected mosquitoes at 55 sites in Suffolk, Virginia in 2013, and identified the source of blood meals in engorged mosquitoes by nucleotide sequencing of PCR products of the mitochondrial cytochrome b gene. We also examined field-collected mosquitoes for evidence of infection with EEEV using Vector Test, cell culture, and PCR. Analysis of 188 engorged Cs. melanura sampled from April through October 2013 indicated that 95.2%, 4.3%, and 0.5% obtained blood meals from avian, mammalian, and reptilian hosts, respectively. American Robin was the most frequently identified host for Cs. melanura (42.6% of blood meals), followed by Northern Cardinal (16.0%), European Starling (11.2%), Carolina Wren (4.3%), and Common Grackle (4.3%). EEEV was detected in 106 mosquito pools of Cs. melanura, and the number of virus-positive pools peaked in late July with 22 positive pools and a Maximum Likelihood Estimation (MLE) infection rate of 4.46 per 1,000 mosquitoes. Our findings highlight the importance of Cs. melanura as a regional EEEV vector based on its frequent feeding on virus-competent bird species. A small proportion of blood meals acquired from mammalian hosts suggests the possibility that this species may occasionally contribute to epidemic/epizootic transmission of EEEV.
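A pooled-testing MLE such as the 4.46 per 1,000 figure above is obtained by maximizing the likelihood that each pool tests positive or negative given a per-mosquito infection probability. The sketch below illustrates the generic calculation with hypothetical pool data; it is not the authors' software, and the pool counts are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_infection_rate(pool_sizes, positives):
    """MLE of the per-mosquito infection probability from pooled tests.
    pool_sizes[j]: mosquitoes in pool j; positives[j]: 1 if pool j tested
    positive. A pool of size m is negative with probability (1 - p)**m."""
    m = np.asarray(pool_sizes, float)
    y = np.asarray(positives, float)

    def neg_log_lik(p):
        q = (1.0 - p) ** m                     # P(pool j negative)
        return -np.sum(y * np.log(1.0 - q) + (1.0 - y) * np.log(q))

    return minimize_scalar(neg_log_lik, bounds=(1e-8, 0.5),
                           method="bounded").x

# Hypothetical data: 22 of 100 pools of 50 mosquitoes test positive
rate = mle_infection_rate([50] * 100, [1] * 22 + [0] * 78)
print(1000 * rate)   # about 4.95 infected per 1,000 mosquitoes
```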
The New NASA Orbital Debris Engineering Model ORDEM 3.0
NASA Technical Reports Server (NTRS)
Krisko, P. H.
2014-01-01
The NASA Orbital Debris Program Office (ODPO) has released its latest Orbital Debris Engineering Model, ORDEM 3.0, which supersedes ORDEM 2.0. This newer model encompasses the Earth satellite and debris flux environment from altitudes of low Earth orbit (LEO) through geosynchronous orbit (GEO). Debris sizes of 10 microns through 1 m in non-GEO and 10 cm through 1 m in GEO are modeled. The inclusive years are 2010 through 2035. The ORDEM model series has always been data driven. ORDEM 3.0 has the benefit of many more hours from existing data sources and from new sources that were not available for past versions. Returned surfaces, ground tests, and remote sensors all contribute data. The returned-surface and ground-test data reveal material characteristics of small particles. Densities of fragmentation debris particles smaller than 10 cm are grouped in ORDEM 3.0 in terms of high-, medium-, and low-densities, along with RORSAT sodium-potassium droplets. Supporting models have advanced significantly. The LEO-to-GEO ENvironment Debris model (LEGEND) includes an historical and a future projection component with yearly populations that include launched and maneuvered intacts, mission-related debris (MRD), and explosion and collision fragments. LEGEND propagates objects with ephemerides and physical characteristics down to 1 mm in size. The full LEGEND yearly population acts as an a priori condition for a Bayesian statistical model. Specific, well-defined populations are added, such as the Radar Ocean Reconnaissance Satellite (RORSAT) sodium-potassium (NaK) droplets, recent major accidental and deliberate collision fragments, and known anomalous debris event fragments. For microdebris of sizes 10 microns to 1 mm, the ODPO uses an in-house Degradation/Ejecta model in which an MLE technique is used with returned-surface data to estimate populations. This paper elaborates on the upgrades of this model over previous versions, highlighting the material density splits and their consequences for the penetration risk to spacecraft.
Empirical evidence for multi-scaled controls on wildfire size distributions in California
NASA Astrophysics Data System (ADS)
Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.
2014-12-01
Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include the (1) frequency and pattern of ignitions, (2) distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire-size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
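The continuous power-law MLE used in analyses of this kind has a closed form (Clauset, Shalizi and Newman, 2009). A minimal sketch, with synthetic fire sizes standing in for the California atlas:

```python
import numpy as np

def powerlaw_mle(sizes, xmin):
    """Closed-form continuous power-law MLE (Clauset et al. 2009):
    alpha = 1 + n / sum(log(x / xmin)) over all x >= xmin."""
    x = np.asarray(sizes, float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    stderr = (alpha - 1.0) / np.sqrt(n)        # asymptotic standard error
    return alpha, stderr

# Synthetic fire sizes (ha) with a Pareto tail of shape 1.5 above 100 ha
rng = np.random.default_rng(1)
fires = 100.0 * (rng.pareto(1.5, size=5000) + 1.0)
print(powerlaw_mle(fires, xmin=100.0))         # alpha near 2.5 (= 1 + 1.5)
```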
Effects on noise properties of GPS time series caused by higher-order ionospheric corrections
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Li, Zhao; Zhou, Xiaohui; Liu, Hongfei
2014-04-01
Higher-order ionospheric (HOI) effects are one of the principal technique-specific error sources in precise global positioning system (GPS) analysis. These effects also influence the non-linear characteristics of GPS coordinate time series. In this paper, we investigate these effects on coordinate time series in terms of seasonal variations and noise amplitudes. Both power spectral techniques and maximum likelihood estimators (MLE) are used to evaluate these effects quantitatively and qualitatively. Our results show an overall improvement in the analysis of global sites if HOI effects are considered. We note that the noise spectral index used for the determination of the optimal noise models in our analysis ranged between -1 and 0 both with and without HOI corrections, implying that the coloured noise cannot be removed by these corrections. However, the corrections were found to improve the noise properties of global sites. After the corrections were applied, the noise amplitudes at most sites decreased, among which the white noise amplitudes decreased remarkably. The white noise amplitudes of up to 81.8% of the selected sites decreased in the up component, and the flicker noise of 67.5% of the sites decreased in the north component. Stacked periodogram results show that, whether or not HOI effects are considered, a common fundamental period of 1.04 cycles per year (cpy), together with the expected annual and semi-annual signals, can explain all peaks of the north and up components well. For the east component, however, reasonable results can be obtained only when HOI corrections are applied. HOI corrections are thus useful for better detecting the periodic signals in GPS coordinate time series. Moreover, the corrections partly accounted for the seasonal variations of the selected sites, especially for the up component. Statistically, HOI corrections reduced more than 50% of the annual and more than 65% of the semi-annual amplitudes at the selected sites.
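As a schematic of how MLE recovers noise amplitudes from a coordinate time series, the sketch below maximizes a Gaussian log-likelihood over a two-component covariance. For tractability it pairs white noise with a random walk, whose covariance is easy to write down; production GPS analyses fit flicker-noise covariances and are more involved, so treat this as an illustration rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize

def noise_amplitude_mle(residuals):
    """MLE of white-noise and random-walk amplitudes for a detrended series
    by maximizing the Gaussian log-likelihood (constants dropped) over the
    covariance C = a^2*I + b^2*K, where K_ij = min(i, j) is the random-walk
    covariance. Flicker noise would require a different K."""
    r = np.asarray(residuals, float)
    n = r.size
    idx = np.arange(1, n + 1)
    K = np.minimum.outer(idx, idx).astype(float)

    def neg_log_lik(theta):
        a2, b2 = np.exp(theta)                 # keep both variances positive
        C = a2 * np.eye(n) + b2 * K
        _, logdet = np.linalg.slogdet(C)
        return 0.5 * (logdet + r @ np.linalg.solve(C, r))

    res = minimize(neg_log_lik, x0=np.log([1.0, 1e-4]), method="Nelder-Mead")
    return np.sqrt(np.exp(res.x))              # (white, random-walk) amplitudes

# Synthetic daily series: 2-unit white noise plus a 0.05-unit random walk
rng = np.random.default_rng(2)
series = rng.normal(0, 2.0, 500) + np.cumsum(rng.normal(0, 0.05, 500))
print(noise_amplitude_mle(series))             # roughly (2.0, 0.05)
```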
Stanish, Lee F.; Hull, Natalie M.; Robertson, Charles E.; Harris, J. Kirk; Stevens, Mark J.; Spear, John R.; Pace, Norman R.
2016-01-01
The composition and metabolic activities of microbes in drinking water distribution systems can affect water quality and distribution system integrity. In order to understand regional variations in drinking water microbiology in the upper Ohio River watershed, the chemical and microbiological constituents of 17 municipal distribution systems were assessed. While sporadic variations were observed, the microbial diversity was generally dominated by fewer than 10 taxa, and was driven by the amount of disinfectant residual in the water. Overall, Mycobacterium spp. (Actinobacteria), MLE1-12 (phylum Cyanobacteria), Methylobacterium spp., and sphingomonads were the dominant taxa. Shifts in community composition from Alphaproteobacteria and Betaproteobacteria to Firmicutes and Gammaproteobacteria were associated with higher residual chlorine. Alpha- and beta-diversity were higher in systems with higher chlorine loads, which may reflect changes in the ecological processes structuring the communities under different levels of oxidative stress. These results expand the assessment of microbial diversity in municipal distribution systems and demonstrate the value of considering ecological theory to understand the processes controlling microbial makeup. Such understanding may inform the management of municipal drinking water resources. PMID:27362708
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
Anaki, David; Goldenberg, Rosalind; Devisheim, Haim; Rosenfelder, Diana; Falik, Lou; Harif, Idit
2016-06-23
NG is an architect who suffered a left occipital-parietal hemorrhagic cerebrovascular accident (CVA) in 2000, resulting in aphasia of the Wernicke and conduction types. He was characterized by fluent paraphasic speech, decreased repetition, and impaired object naming. Comprehension was relatively preserved, but reading and writing were severely compromised, as was his auditory working memory. Despite a grim prognosis, he underwent intensive aphasia therapy, lasting from 2001 to 2010, at the Center for Cognitive Rehabilitation of the Brain Injured at the Feuerstein Institute. The tailor-made interventions applied in NG's therapy were based upon the implementation of the principles of the Structural Mediated Learning Experience (MLE) and the Feuerstein Instrumental Enrichment (FIE) Program, to optimize his rehabilitation. As a result, NG improved in most of his impaired linguistic capacities, as attested by the results of neuropsychological and linguistic assessments performed throughout the years. More importantly, he was able to manage his daily functions again at a high level, and to resume his occupational role as an architect, a role which he holds to this day.
Emami, Nasir; Sobhani, Reza; Rosso, Diego
2018-04-01
A model was developed for a water resources recovery facility (WRRF) activated sludge process (ASP) in the Modified Ludzack-Ettinger (MLE) configuration. Amplification of air requirements and the associated energy consumption was observed as a result of concurrent circadian variations in ASP influent flow and carbonaceous/nitrogenous constituent concentrations. The indirect carbon emissions associated with ASP aeration were further amplified due to the simultaneous variations in carbon emissions intensity (kgCO2,eq/kWh) and electricity consumption (kWh). The ratio of peak to minimum increased to 3.4 (for flow), 4.2 (for air flow and energy consumption), and 5.2 (for indirect CO2,eq emissions), which is indicative of strong amplification. Similarly, the energy costs for ASP aeration were further increased due to the concurrency of peak energy consumption and power demand with time-of-use peak electricity rates. A comparison between the results of the equilibrium model and observed data from the benchmark WRRF demonstrated under- and over-aeration attributable to the circadian variation in air requirements and to limitations associated with the aeration system specification and design.
Grygierek, Krzysztof; Ferdyn-Grygierek, Joanna
2018-01-01
An inappropriate indoor climate, particularly indoor temperature, may cause occupants' discomfort. A great number of air conditioning systems make it possible to maintain the required thermal comfort; their installation, however, involves high investment costs and high energy demand. The study analyses the possibilities of limiting excessive temperatures in residential buildings using passive cooling by means of ventilation with cool ambient air. A fuzzy logic controller whose aim is to control mechanical ventilation has been proposed and optimized. In order to optimize the controller, a modified Multiobjective Evolutionary Algorithm, based on the Strength Pareto Evolutionary Algorithm, has been adopted. The optimization algorithm has been implemented in MATLAB®, which is coupled by MLE+ with EnergyPlus for performing dynamic co-simulation between the programs. The example of a single detached building shows that the occupants' thermal comfort in a transitional climate may improve significantly owing to mechanical ventilation controlled by the suggested fuzzy logic controller. When the system is connected to a traditional cooling system, it may further decrease cooling demand. PMID:29642525
The biology of DHX9 and its potential as a therapeutic target
Lee, Teresa; Pelletier, Jerry
2016-01-01
DHX9 is a member of the DExD/H-box family of helicases, with a “DEIH” sequence at its eponymous DExH-box motif. Initially purified from human and bovine cells and identified as a homologue of the Drosophila Maleless (MLE) protein, it is an NTP-dependent helicase consisting of a conserved helicase core domain, two double-stranded RNA-binding domains at the N-terminus, and a nuclear transport domain and a single-stranded DNA-binding RGG-box at the C-terminus. With an ability to unwind DNA and RNA duplexes, as well as more complex nucleic acid structures, DHX9 appears to play a central role in many cellular processes. Its functions include regulation of DNA replication, transcription, translation, microRNA biogenesis, RNA processing and transport, and maintenance of genomic stability. Because of its central role in gene regulation and RNA metabolism, there are growing implications for DHX9 in human diseases and their treatment. This review will provide an overview of the structure, biochemistry, and biology of DHX9, its role in cancer and other human diseases, and the possibility of targeting DHX9 in chemotherapy. PMID:27034008
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Although various online SOC estimation methods strive to maximize estimation accuracy within limited on-chip resources, little of the literature discusses the error sources of those methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of SOC estimation methods is then proposed: the methods are analyzed from the viewpoints of the measured values, models, algorithms, and state parameters. Subsequently, error flow charts are proposed to trace the error sources, from signal measurement through to the models and algorithms, for the online SOC estimation methods widely used in new energy vehicles. Finally, with consideration of working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and future development of promising online SOC estimation methods is suggested.
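To make the error-flow idea concrete, the following minimal coulomb-counting sketch (not any particular BMS implementation) shows how one measured-value error source, a constant current-sensor bias, integrates into a steadily growing SOC error; all parameter values are hypothetical.

```python
import numpy as np

def coulomb_count(soc0, current_a, dt_s, capacity_ah, sensor_bias_a=0.0):
    """Minimal coulomb-counting SOC estimator:
    SOC_k = SOC_{k-1} - I_k * dt / (3600 * C). A constant current-sensor
    bias is integrated along with the signal, so the SOC error grows
    linearly with time (a measured-value error source)."""
    soc = [soc0]
    for i in np.asarray(current_a, float) + sensor_bias_a:
        soc.append(soc[-1] - i * dt_s / (3600.0 * capacity_ah))
    return np.array(soc)

# One hour of a 10 A discharge on a 50 Ah cell, sampled every second
I = np.full(3600, 10.0)
ideal = coulomb_count(0.9, I, 1.0, 50.0)
biased = coulomb_count(0.9, I, 1.0, 50.0, sensor_bias_a=0.2)
print(ideal[-1], biased[-1])   # a 0.2 A bias costs 0.4% SOC per hour
```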
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of the quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
Background: In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods: We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings: Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions: The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model relating a power of the distance from an inspection point to the number of samples contained in a ball of that radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
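The abstract does not spell out the estimator in closed form; as a point of reference, the closely related k-nearest-neighbor maximum likelihood estimator of local intrinsic dimension (Levina-Bickel) can be sketched as follows, with the sample data and k as illustrative assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def mle_local_dimension(X, x0, k=20):
        """k-NN maximum likelihood estimate of intrinsic dimension near x0."""
        tree = cKDTree(X)
        d, _ = tree.query(x0, k=k)       # distances to the k nearest neighbors
        d = d[d > 0]                     # drop the zero self-distance, if any
        # m_hat = [ (1/(k-1)) * sum_j log(T_k / T_j) ]^(-1)
        return (len(d) - 1) / np.sum(np.log(d[-1] / d[:-1]))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))       # data whose intrinsic dimension is 3
    print(mle_local_dimension(X, X[0]))  # should be close to 3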
Harwell, Glenn R.
2012-01-01
Organizations responsible for the management of water resources, such as the U.S. Army Corps of Engineers (USACE), are tasked with estimation of evaporation for water-budgeting and planning purposes. The USACE has historically used Class A pan evaporation data (pan data) to estimate evaporation from reservoirs, but many USACE Districts have been experimenting with other techniques as an alternative to collecting pan data. The energy-budget method generally is considered the preferred method for accurate estimation of open-water evaporation from lakes and reservoirs. Complex equations to estimate evaporation, such as the Penman, DeBruin-Keijman, and Priestley-Taylor, perform well when compared with energy-budget method estimates when all of the important energy terms are included in the equations and ideal data are collected. However, sometimes nonideal data are collected and energy terms, such as the change in the amount of stored energy and advected energy, are not included in the equations. When this is done, the corresponding errors in evaporation estimates are not quantifiable. Much simpler methods, such as the Hamon method and a method developed by the U.S. Weather Bureau (USWB) (renamed the National Weather Service in 1970), have been shown to provide reasonable estimates of evaporation when compared to energy-budget method estimates. Data requirements for the Hamon and USWB methods are minimal, and the methods sometimes perform well with remotely collected data. The Hamon method requires average daily air temperature, and the USWB method requires daily averages of air temperature, relative humidity, wind speed, and solar radiation. Estimates of annual lake evaporation from pan data are frequently within 20 percent of energy-budget method estimates. Results of evaporation estimates from the Hamon method and the USWB method were compared against historical pan data at five selected reservoirs in Texas (Benbrook Lake, Canyon Lake, Granger Lake, Hords Creek Lake, and Sam Rayburn Lake) to evaluate their performance and to develop coefficients to minimize bias for the purpose of estimating reservoir evaporation with accuracies similar to estimates of evaporation obtained from pan data. The modified Hamon method estimates of reservoir evaporation were similar to estimates of reservoir evaporation from pan data for daily, monthly, and annual time periods. The modified Hamon method estimates of annual reservoir evaporation were always within 20 percent of annual reservoir evaporation from pan data. Unmodified and modified USWB method estimates of annual reservoir evaporation were within 20 percent of annual reservoir evaporation from pan data for about 91 percent of the years compared. Average daily differences between modified USWB method estimates and estimates from pan data as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 98 percent of the months. Without any modification to the USWB method, average daily differences as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 73 percent of the months. Use of the unmodified USWB method is appealing because it means estimates of average daily reservoir evaporation can be made from air temperature, relative humidity, wind speed, and solar radiation data collected from remote weather stations without the need to develop site-specific coefficients from historical pan data.
Site-specific coefficients would need to be developed for the modified version of the Hamon method.
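For readers who want to try the simpler approach, a commonly published form of the Hamon equation is sketched below in Python; the 0.55 coefficient and the Magnus-type vapor-pressure constants are assumptions taken from the general literature, not the report's site-specific, calibrated version.

    import math

    def hamon_pet_mm(temp_c, daylight_hours):
        """One common form of the Hamon equation (assumed coefficients).
        temp_c: mean daily air temperature (deg C); daylight_hours: daylength."""
        es = 6.108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # sat. vapor pressure, mb
        svd = 216.7 * es / (temp_c + 273.3)                       # sat. vapor density, g/m^3
        pet_inches = 0.55 * (daylight_hours / 12.0) ** 2 * (svd / 100.0)
        return pet_inches * 25.4                                  # convert to mm/day

    print(hamon_pet_mm(temp_c=28.0, daylight_hours=13.5))  # a summer day, illustrative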
Coughlan, Diarmuid; Yeh, Susan T; O'Neill, Ciaran; Frick, Kevin D
2014-01-01
This study aimed to inform policymakers of the importance of evaluating various methods for estimating the direct medical expenditures for a low-incidence condition, head and neck cancer (HNC). Four methods of estimation have been identified: 1) summing all health care expenditures, 2) estimating disease-specific expenditures consistent with an attribution approach, 3) estimating disease-specific expenditures by matching, and 4) estimating disease-specific expenditures by using a regression-based approach. A literature review of studies (2005-2012) that used the Medical Expenditure Panel Survey (MEPS) was undertaken to establish the most popular expenditure estimation methods. These methods were then applied to a sample of 120 respondents with HNC, derived from pooled data (2003-2008). The literature review shows that varying expenditure estimation methods have been used with MEPS, but no study has compared and contrasted all four methods. Our estimates are reflective of the national treated prevalence of HNC. The upper-bound estimate of annual direct medical expenditures of adult respondents with HNC between 2003 and 2008 was $3.18 billion (in 2008 dollars). Comparable estimates arising from methods focusing on disease-specific and incremental expenditures were all lower in magnitude. The attribution approach yielded annual expenditures of $1.41 billion, the matching method $1.56 billion, and the regression method $1.09 billion. This research demonstrates that variation exists across and within expenditure estimation methods applied to MEPS data. Despite concerns regarding aspects of reliability and consistency, reporting a combination of the four methods offers a degree of transparency and validity to estimating the likely range of annual direct medical expenditures of a condition. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
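A sketch of the basic scale-up estimator described above: the hidden-population size is the total number of ties respondents report to hidden-population members, divided by their total personal network size, scaled to the general population. All numbers here are illustrative, and the generalized estimator's adjustments (e.g., for transmission and barrier effects) are omitted.

    import numpy as np

    # y[i]: hidden-population members respondent i reports knowing
    # c[i]: respondent i's estimated personal network size
    y = np.array([2, 0, 1, 3, 0, 0, 1])
    c = np.array([250, 300, 180, 400, 220, 150, 310])
    N = 1_800_000                      # assumed general-population size

    hidden_size = y.sum() / c.sum() * N
    print(f"scale-up estimate: {hidden_size:,.0f}")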
Studies on spatio-temporal filtering of GNSS-derived coordinates
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Bogusz, Janusz; Kłos, Anna; Figurski, Mariusz
2015-04-01
The information about lithospheric deformations may nowadays be obtained by analysis of velocity fields derived from permanent GNSS (Global Navigation Satellite System) observations. Despite the development of ever more reliable models, the permanent station residuals must still be considered coloured noise. To meet the GGOS (Global Geodetic Observing System) requirements, we are obliged to investigate the correlations between residuals, which are the result of common mode error (CME). This type of error may arise from mismodelling of satellite orbits, the Earth Orientation Parameters, or satellite antenna phase centre variations, or from unmodelled large-scale atmospheric effects. Together, these effects cause correlations between the stochastic parts of coordinate time series obtained at stations located even a few thousand kilometres from each other. Permanent stations that meet the aforementioned terms form regional (EPN - EUREF Permanent Network) or local sub-networks of the global (IGS - International GNSS Service) network. Other authors (Wdowinski et al., 1997; Dong et al., 2006) dealt with spatio-temporal filtering and indicated three major regional filtering approaches: stacking, Principal Component Analysis (PCA) based on the empirical orthogonal function, and the Karhunen-Loeve expansion. The need for spatio-temporal filtering is evident today, but the question of whether the size of the network affects the accuracy of a station's position and velocity remains unanswered. With the aim of determining the network size for which the assumption of a spatially uniform distribution of CME holds, we used the stacking approach. We analyzed time series of IGS stations with daily network solutions processed by the Military University of Technology EPN Local Analysis Centre in Bernese 5.0 software and compared them with the JPL (Jet Propulsion Laboratory) PPP (Precise Point Positioning) solutions. The method we propose is based on the division of local GNSS networks into concentric ring-shaped areas. Such an approach allows us to specify the maximum size of the network for which an evidently uniform spatial response can still be noticed. In terms of reliable CME extraction, the local networks have to be up to 500-600 kilometres in extent, depending on their character (location). In this study we examined three approaches to spatio-temporal filtering based on the stacking procedure. The first was based on a non-weighted average formula (Wdowinski et al., 1997) and the second on a weighted average formula, where the weights are formed by the RMS of the individual station position at the corresponding epoch (Nikolaidis, 2002). The third stacking approach, proposed here, was previously unused. It combines weighted stacking with the distance between the station and the network barycentre in one approach. The analysis allowed us to determine the optimal size of a local GNSS network and to select the appropriate stacking method for obtaining the most stable solutions for, e.g., geodynamical studies. The values of L1 and L2 norms, RMS values of the time series (describing their stability) and Pearson correlation coefficients were calculated for the North, East and Up components from more than 200 permanent stations twice: before filtration and after the weighted stacking approach. We showed the improvement in the quality of time series analysis using MLE (Maximum Likelihood Estimation) to estimate noise parameters.
We demonstrated that relative RMS improvements of 10, 20 and 30% reduce the noise amplitudes by about 20, 35 and 45%, respectively, which reduces the velocity uncertainty by 0.3 mm/yr (under the assumption of 7 years of data and flicker noise). The relative decrease of the spectral index kappa is 25, 35 and 45%, which means a velocity uncertainty lower by up to 0.2 mm/yr (when assuming 7 years of data and a noise amplitude of 15 mm/yr^(-kappa/4)). These results respond to the growing demands on the stability of the series arising from their use in realizing kinematic reference frames and in geodynamical studies.
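A sketch of the weighted stacking step (in the spirit of Nikolaidis, 2002) discussed above: each epoch's common mode error is an RMS-weighted mean of the station residuals and is then subtracted from every station. The distance-to-barycentre weighting the authors add is omitted, and all data are synthetic.

    import numpy as np

    def weighted_stack(residuals, rms):
        """CME per epoch as an RMS-weighted mean over stations.
        residuals, rms: arrays of shape (n_epochs, n_stations)."""
        w = 1.0 / rms**2
        return np.nansum(residuals * w, axis=1) / np.nansum(w, axis=1)

    rng = np.random.default_rng(1)
    common = rng.normal(0, 2, size=(1000, 1))            # shared signal, mm
    res = common + rng.normal(0, 1, size=(1000, 30))     # 30 stations
    rms = np.full_like(res, 1.0)                         # formal errors (assumed)
    filtered = res - weighted_stack(res, rms)[:, None]   # remove the CME
    print(res.std(), filtered.std())                     # scatter before/after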
A Novel Residual Frequency Estimation Method for GNSS Receivers.
Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai
2018-01-04
In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. The current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can lead to more accurate estimates without increasing the size of the search space.
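The paper's exact algorithm is not reproduced here; as a baseline, a generic differential estimator, which recovers the residual frequency from the phase rotation between consecutive coherent correlator outputs, can be sketched as follows (the integration time, frequency, and noise level are assumptions).

    import numpy as np

    def differential_freq(prompt, T):
        """Residual frequency from consecutive prompt correlator outputs:
        f = angle(sum P[k] * conj(P[k-1])) / (2*pi*T)."""
        return np.angle(np.sum(prompt[1:] * np.conj(prompt[:-1]))) / (2 * np.pi * T)

    T = 1e-3                                   # 1 ms coherent integration
    f_res = 37.0                               # Hz, residual frequency to recover
    k = np.arange(200)
    noise = 0.1 * (np.random.randn(200) + 1j * np.random.randn(200))
    prompt = np.exp(2j * np.pi * f_res * k * T) + noise
    print(differential_freq(prompt, T))        # approximately 37 Hz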
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…
Direct volume estimation without segmentation
NASA Astrophysics Data System (ADS)
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing them with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter spaces. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
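A minimal sketch of the trapezoidal discretization-based estimate for the scalar model dx/dt = theta*x. In the authors' pipeline a penalized-spline smoother would supply the state estimates; lightly noisy states stand in for that step here, and all numbers are illustrative.

    import numpy as np

    theta_true = -0.5
    t = np.linspace(0, 5, 101)
    h = t[1] - t[0]
    x = np.exp(theta_true * t) + np.random.normal(0, 0.005, t.size)  # "smoothed" states

    # trapezoidal rule: x[i+1] - x[i] = (h/2) * theta * (x[i] + x[i+1])
    dy = x[1:] - x[:-1]
    z = 0.5 * h * (x[1:] + x[:-1])
    theta_hat = np.sum(z * dy) / np.sum(z * z)   # least squares through the origin
    print(theta_hat)                             # approximately -0.5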
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.
Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE
NASA Astrophysics Data System (ADS)
Itai, Akitoshi; Yasukawa, Hiroshi
This paper proposes a method of background noise estimation based on tensor product expansion with a median and a Monte Carlo simulation. We have shown previously that a tensor product expansion with the absolute error method is effective for estimating background noise; however, the conventional method may not estimate the background noise properly. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
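To illustrate the ingredients named above, the sketch below computes a Welch PSD estimate with SciPy and then fits the power law α(f) = α0 f^β by log-log least squares; the RF segment and the attenuation values are synthetic stand-ins, not phantom data.

    import numpy as np
    from scipy.signal import welch

    fs = 40e6
    rf = np.random.randn(4096)                 # stand-in for backscattered RF data
    f, psd = welch(rf, fs=fs, nperseg=256)     # Welch periodogram PSD estimate
    print("peak PSD bin: %.1f MHz" % (f[np.argmax(psd)] / 1e6))

    # log-log fit of alpha(f) = alpha0 * f**beta to attenuation estimates
    freqs_mhz = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
    alpha_db = 0.5 * freqs_mhz**1.1            # assumed measured values, dB/cm
    beta, log_a0 = np.polyfit(np.log(freqs_mhz), np.log(alpha_db), 1)
    print("alpha0 = %.3f, beta = %.2f" % (np.exp(log_a0), beta))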
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series, 23 years in length, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed at 1 mm/y^(-kappa/4). Then, we added the deterministic part consisting of a linear trend of 20 mm/y (representing the averaged horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/y^2. For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y^2 and -4.5±3.3 mm/y^2 for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
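A sketch of the extended deterministic model x(t) = x0 + v*t + 0.5*a*t^2 fitted by ordinary least squares; the full analysis would estimate the noise parameters jointly by MLE (e.g., in Hector), which is omitted here, and all numbers are illustrative.

    import numpy as np

    t = np.arange(0, 23, 1 / 365.25)                     # 23 years, daily sampling
    x = 20.0 * t + 0.5 * 0.4 * t**2 + np.random.normal(0, 3, t.size)  # mm, synthetic

    A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    x0, v, a = coef
    print("velocity %.2f mm/yr, acceleration %.2f mm/yr^2" % (v, a))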
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω^2), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
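A small sketch of the two error summaries named above; the exact definitions in the paper may differ slightly from these common forms, and the simulated estimates are illustrative.

    import numpy as np

    def ree_and_rrmse(theta_hat, theta_true):
        """Relative estimation error (%) per replicate and relative RMSE (%)
        across replicates, as commonly defined in SSE studies."""
        rel = (theta_hat - theta_true) / theta_true
        return 100.0 * rel, 100.0 * np.sqrt(np.mean(rel**2))

    est = np.random.normal(loc=10.0, scale=1.5, size=100)  # 100 simulated estimates
    ree, rrmse = ree_and_rrmse(est, theta_true=10.0)
    print("median REE %.1f%%, rRMSE %.1f%%" % (np.median(ree), rrmse))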
Estimating survival of radio-tagged birds
Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.
1993-01-01
Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log-rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
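A compact sketch of the Kaplan-Meier estimator for telemetry-style data, where a censored record (event = 0) can represent radio failure or signal loss; the survival times below are invented.

    import numpy as np

    def kaplan_meier(times, event):
        """Kaplan-Meier survival curve; event = 1 death, 0 censored.
        Returns the event times and the survival probability after each."""
        order = np.argsort(times)
        times, event = times[order], event[order]
        n_at_risk = len(times)
        s, out_t, out_s = 1.0, [], []
        for ti in np.unique(times):
            deaths = np.sum((times == ti) & (event == 1))
            if deaths > 0:
                s *= 1.0 - deaths / n_at_risk
                out_t.append(ti)
                out_s.append(s)
            n_at_risk -= np.sum(times == ti)   # deaths and censored leave the risk set
        return np.array(out_t), np.array(out_s)

    t = np.array([3.0, 5, 5, 8, 12, 16, 16, 20])   # days after tagging
    e = np.array([1, 1, 0, 1, 0, 1, 1, 0])
    print(kaplan_meier(t, e))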
DHA Suppresses Primary Macrophage Inflammatory Responses via Notch 1/ Jagged 1 Signaling
Ali, Mehboob; Heyob, Kathryn; Rogers, Lynette K.
2016-01-01
Persistent macrophages were observed in the lungs of murine offspring exposed to maternal LPS and neonatal hyperoxia. Maternal docosahexaenoic acid (DHA) supplementation prevented the accumulation of macrophages and improved lung development. We hypothesized that these macrophages are responsible for pathologies observed in this model and the effects of DHA supplementation. Primary macrophages were isolated from adult mice fed standard chow, control diets, or DHA-supplemented diets. Macrophages were exposed to hyperoxia (O2) for 24 h and LPS for 6 h or 24 h. Our data demonstrate significant attenuation of Notch 1 and Jagged 1 protein levels in response to DHA supplementation in vivo, but similar results were not evident in macrophages isolated from mice fed standard chow and supplemented with DHA in vitro. Co-culture of activated macrophages with MLE12 epithelial cells resulted in the release of high mobility group box 1 and leukotriene B4 (LTB4) from the epithelial cells, and this release was attenuated by DHA supplementation. Collectively, our data indicate that long-term supplementation with DHA in vivo resulted in decreased Notch 1/Jagged 1 protein expression; however, DHA supplementation in vitro was sufficient to suppress release of LTB4 and to protect epithelial cells in co-culture. PMID:26940787
Reactivity of a Thick BaO Film Supported on Pt(111): Adsorption and Reaction of NO2, H2O and CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudiyanselage, Kumudu; Yi, Cheol-Woo W.; Szanyi, Janos
2009-09-15
Reactions of NO2, H2O, and CO2 with a thick (> 20 MLE) BaO film supported on Pt(111) were studied with temperature programmed desorption (TPD) and X-ray photoelectron spectroscopy (XPS). NO2 reacts with the thick BaO film to form surface nitrite-nitrate ion pairs at 300 K, while only nitrates form at 600 K. In the thermal decomposition of the nitrite-nitrate ion pairs, the nitrites decompose first and desorb as NO. The nitrates then decompose in two steps: at lower temperature with the release of NO2, and at higher temperature by dissociation to NO + O2. The thick BaO layer converts completely to Ba(OH)2 following the adsorption of H2O at 300 K. Dehydration/dehydroxylation of this hydroxide layer can be fully achieved by annealing to 550 K. CO2 also reacts with BaO to form BaCO3, which completely decomposes to regenerate BaO upon annealing to 825 K. However, the thick BaO film cannot be converted completely to Ba(NOx)2 or BaCO3 under the experimental conditions employed in this study.
Wambach, Jennifer A.; Yang, Ping; Wegner, Daniel J.; An, Ping; Hackett, Brian P.; Cole, F. S.; Hamvas, Aaron
2010-01-01
Dominant mutations in coding regions of the surfactant protein-C gene (SFTPC) cause respiratory distress syndrome (RDS) in infants. However, the contribution of variants in noncoding regions of SFTPC to pulmonary phenotypes is unknown. Using a case-control group of infants ≥34 weeks gestation (n=538), we used complete resequencing of SFTPC and its promoter, genotyping, and logistic regression to identify 80 single nucleotide polymorphisms (SNPs). Three promoter SNPs were statistically associated with neonatal RDS among European descent infants. To assess the transcriptional effects of these three promoter SNPs, we selectively mutated the SFTPC promoter and performed transient transfection using MLE-15 cells and a firefly luciferase reporter vector. Each promoter SNP decreased SFTPC transcription. The combination of two variants in high linkage disequilibrium also decreased SFTPC transcription. In silico evaluation of transcription factor binding demonstrated that the rare allele at g.-1167 disrupts a SOX (SRY-related high mobility group box) consensus motif and introduces a GATA-1 site, at g.-2385 removes a MZF-1 (myeloid zinc finger) binding site, and at g.-1647 removes a potential methylation site. This combined statistical, in vitro, and in silico approach suggests that reduced SFTPC transcription contributes to the genetic risk for neonatal RDS in developmentally susceptible infants. PMID:20539253
NASA Astrophysics Data System (ADS)
Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo
This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in varying states of health confirm the following implications: (1) the proposed method can provide more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
NASA Astrophysics Data System (ADS)
Pachon, Jorge E.; Balachandran, Sivaraman; Hu, Yongtao; Weber, Rodney J.; Mulholland, James A.; Russell, Armistead G.
2010-10-01
In the Southeastern US, organic carbon (OC) comprises about 30% of the PM2.5 mass. A large fraction of OC is estimated to be of secondary origin. Long-term estimates of secondary organic carbon (SOC) and their uncertainties are necessary in the evaluation of air quality policy effectiveness and in epidemiologic studies. Four methods to estimate SOC and the respective uncertainties are compared utilizing PM2.5 chemical composition and gas phase data available in Atlanta from 1999 to 2007. The elemental carbon (EC) tracer and the regression methods, which rely on the use of tracer species of primary and secondary OC formation, provided intermediate estimates of SOC as 30% of OC. The other two methods, chemical mass balance (CMB) and positive matrix factorization (PMF), solve mass balance equations to estimate primary and secondary fractions based on source profiles and statistically derived common factors, respectively. CMB had the highest estimate of SOC (46% of OC) while PMF led to the lowest (26% of OC). The comparison of SOC uncertainties, estimated based on propagation of errors, led to the regression method having the lowest uncertainty among the four methods. We compared the estimates with the water-soluble fraction of the OC, which has been suggested as a surrogate of SOC when biomass burning is negligible, and found a similar trend with SOC estimates from the regression method. The regression method also showed the strongest correlation with daily SOC estimates from CMB using molecular markers. The regression method shows advantages over the other methods in the calculation of a long-term series of SOC estimates.
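For the EC tracer method mentioned above, a minimal sketch: SOC = OC - EC*(OC/EC)_pri, where the primary OC/EC ratio is proxied here by a low percentile of the observed ratios (one of several conventions); the concentrations are illustrative, not Atlanta data.

    import numpy as np

    oc = np.array([4.1, 5.0, 6.3, 3.8, 7.2])   # ug C/m^3, illustrative
    ec = np.array([1.0, 1.3, 1.4, 0.9, 1.5])

    oc_ec_pri = np.percentile(oc / ec, 5)      # assumed proxy for the primary ratio
    soc = oc - ec * oc_ec_pri                  # EC tracer estimate of secondary OC
    print(soc, soc.sum() / oc.sum())           # SOC and its share of total OC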
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
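A self-contained sketch of IPTW with a bootstrap standard error; to stay short it uses a weighted mean difference of a continuous outcome rather than the weighted Cox model studied in the paper, and all data-generating values are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=(n, 2))                        # confounders
    p = 1 / (1 + np.exp(-(x @ [0.8, -0.5])))           # true propensity
    z = rng.binomial(1, p)                             # treatment received
    y = 1.0 * z + x @ [1.0, 1.0] + rng.normal(size=n)  # outcome, true effect = 1

    def iptw_effect(x, z, y):
        ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
        w = np.where(z == 1, 1 / ps, 1 / (1 - ps))     # ATE-type weights
        return np.average(y[z == 1], weights=w[z == 1]) - \
               np.average(y[z == 0], weights=w[z == 0])

    boot = [iptw_effect(x[i], z[i], y[i])
            for i in (rng.integers(0, n, n) for _ in range(200))]
    print(iptw_effect(x, z, y), np.std(boot))          # estimate and bootstrap SE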
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
Growth and mortality of larval sunfish in backwaters of the upper Mississippi River
Zigler, S.J.; Jennings, C.A.
1993-01-01
The authors estimated the growth and mortality of larval sunfish Lepomis spp. in backwater habitats of the upper Mississippi River with an otolith-based method and a length-based method. Fish were sampled with plankton nets at one station in Navigation Pools 8 and 14 in 1989 and at two stations in Pool 8 in 1990. For both methods, growth was modeled with an exponential equation, and instantaneous mortality was estimated by regressing the natural logarithm of fish catch for each 1-mm size-group against the estimated age of the group, which was derived from the growth equations. At two of the stations, the otolith-based method provided more precise estimates of sunfish growth than the length-based method. We were able to compare length-based and otolith-based estimates of sunfish mortality only at the two stations where we caught the largest numbers of sunfish. Estimates of mortality were similar for both methods in Pool 14, where catches were higher, but the length-based method gave significantly higher estimates in Pool 8, where the catches were lower. The otolith-based method required more laboratory analysis, but provided better estimates of the growth and mortality than the length-based method when catches were low. However, the length-based method was more cost-effective for estimating growth and mortality when catches were large.
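The mortality step described above amounts to a catch-curve regression: the instantaneous mortality rate Z is the negative slope of ln(catch) regressed on estimated age. A sketch with assumed growth parameters (L0 and g are illustrative, not the study's estimates):

    import numpy as np

    length_mm = np.arange(5, 13)                       # 1-mm size groups
    catch = np.array([210, 160, 120, 95, 70, 52, 40, 31])

    # ages back-calculated from an exponential growth model L(t) = L0*exp(g*t)
    L0, g = 4.0, 0.08                                  # assumed parameters
    age_d = np.log(length_mm / L0) / g                 # age in days

    Z = -np.polyfit(age_d, np.log(catch), 1)[0]        # negative slope of the catch curve
    print("instantaneous daily mortality Z = %.3f" % Z)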
Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA
Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.
2007-01-01
Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance (UZWB), water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1000 m2. A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at a scale of 10–1000s of km2. The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local- and basin-scale recharge estimates. RRR estimates at study locations were about 41% less on average than the UZWB estimates, ranged from 44% greater to 12% less than estimates that were based on the three WTF approaches, were about 4% less than the age dating of ground-water estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates using the RRR model could be a good source of input for regional ground-water flow models; RRR model results currently are being applied for this purpose in USGS studies elsewhere.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled receive signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method, the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
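A sketch of the Pilot-Guided estimator exactly as described: amplitude from the mean inner product with the known ASM, variance from the mean square of the received sequence minus the squared amplitude; the marker and noise values are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    asm = rng.choice([-1.0, 1.0], size=64)             # known marker (illustrative)
    A_true, var_true = 1.0, 0.5
    r = A_true * asm + rng.normal(0, np.sqrt(var_true), asm.size)

    A_hat = np.mean(r * asm)                           # ML amplitude estimate
    var_hat = np.mean(r**2) - A_hat**2                 # ML variance estimate
    print("combining ratio A/sigma^2 = %.3f (true %.3f)"
          % (A_hat / var_hat, A_true / var_true))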
Liu, Hong; Yan, Meng; Song, Enmin; Wang, Jie; Wang, Qian; Jin, Renchao; Jin, Lianghai; Hung, Chih-Cheng
2016-05-01
Myocardial motion estimation of tagged cardiac magnetic resonance (TCMR) images is of great significance in clinical diagnosis and the treatment of heart disease. Currently, the harmonic phase analysis method (HARP) and the local sine-wave modeling method (SinMod) are two state-of-the-art motion estimation methods for TCMR images, since they can directly obtain the inter-frame motion displacement vector field (MDVF) with high accuracy and fast speed. By comparison, SinMod has better performance than HARP in terms of displacement detection and noise and artifact reduction. However, the SinMod method has some drawbacks: 1) it is unable to estimate local displacements larger than half of the tag spacing; 2) it has observable errors in the tracking of tag motion; and 3) the estimated MDVF usually has large local errors. To overcome these problems, we present a novel motion estimation method in this study. The proposed method tracks the motion of tags and then estimates the dense MDVF by interpolation. In this new method, a parameter estimation procedure for global motion is applied to match tag intersections between different frames, ensuring that specific kinds of large displacements are correctly estimated. In addition, a strategy of tag motion constraints is applied to eliminate most of the errors produced by inter-frame tracking of tags, and a multi-level B-spline approximation algorithm is utilized to enhance the local continuity and accuracy of the final MDVF. In the estimation of the motion displacement, our proposed method can obtain a more accurate MDVF than the SinMod method and overcomes the drawbacks of SinMod. However, the motion estimation accuracy of our method depends on the accuracy of tag line detection, and our method has a higher time complexity. Copyright © 2015 Elsevier Inc. All rights reserved.
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, the distance to a target is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. On the other hand, methods using phase interference to measure short distances are known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate the talker position. The proposed method can estimate the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose combining the proposed method with the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
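The CSP method the authors combine with their approach is essentially GCC-PHAT; a generic sketch (not the authors' implementation) that recovers an arrival-time difference between two channels from the cross-power spectrum phase:

    import numpy as np

    def gcc_phat_delay(sig, ref, fs):
        """Time difference of arrival via the cross-power spectrum phase."""
        n = 2 * max(len(sig), len(ref))
        X = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
        csp = np.fft.irfft(X / (np.abs(X) + 1e-12), n)      # phase-only correlation
        lag = np.argmax(np.concatenate((csp[-n // 2:], csp[:n // 2])))
        return (lag - n // 2) / fs

    fs = 16000
    x = np.random.randn(4096)
    y = np.roll(x, 7)                      # simulated 7-sample arrival delay
    print(gcc_phat_delay(y, x, fs) * fs)   # approximately 7 samples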
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
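For concreteness, the flow-ratio transfer described above can be sketched as follows (a hedged illustration with made-up numbers; the Moments and MOVE1 correlation methods are not reproduced):

```python
import numpy as np

def flow_ratio_estimate(q_partial, q_index_concurrent, index_statistic):
    """Transfer a streamflow statistic from an index streamgage to a
    low-flow partial-record station: average the ratios of measured
    flows at the station to concurrent flows at the index gage, then
    scale the index gage's statistic by that average ratio."""
    ratios = np.asarray(q_partial, float) / np.asarray(q_index_concurrent, float)
    return ratios.mean() * index_statistic

# e.g., three paired measurements and an index-gage 7-day, 10-year low flow
print(flow_ratio_estimate([0.8, 1.1, 0.9], [2.0, 2.6, 2.3], 1.5))
```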
Estimation of the size of the female sex worker population in Rwanda using three different methods
Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2014-01-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture–recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture–recapture method was 3205 (95% confidence interval: 2998–3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916–2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture–recapture, enumeration, and multiplier methods. The capture–recapture and enumeration methods provided similar estimates of the female sex worker population size in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. PMID:25336306
Estimation of the size of the female sex worker population in Rwanda using three different methods.
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2015-10-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of the female sex worker population size in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. © The Author(s) 2015.
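The capture-recapture calculation behind estimates of this kind is typically of the Lincoln-Petersen type; a sketch using Chapman's bias-corrected form with a normal-approximation confidence interval (the study's exact venue-based protocol and interval method are not stated here, so treat this as an assumption-laden illustration):

```python
import numpy as np

def chapman_estimate(n1, n2, m, z=1.96):
    """Chapman's bias-corrected form of the Lincoln-Petersen
    capture-recapture estimator with a normal-approximation CI.
    n1: individuals tagged on the first visit; n2: counted on the
    second visit; m: recaptures (seen on both visits)."""
    N = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = z * np.sqrt(var)
    return N, (N - half, N + half)

print(chapman_estimate(n1=900, n2=950, m=260))  # illustrative counts only
```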
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors and the resulting analyses are unreliable. To solve this problem, an estimation method is required that provides better accuracy by combining survey data with auxiliary data. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method based on maximum likelihood (ML) procedures does not account for the loss of degrees of freedom incurred by estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) so as to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
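The ML-versus-REML distinction noted above is easiest to see in an ordinary linear model, where ML divides the residual sum of squares by n while REML divides by n - p, compensating for the p degrees of freedom spent estimating β. A toy sketch (not the full EBLUP machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = X @ np.ones(p) + rng.normal(scale=2.0, size=n)  # true sigma = 2

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)
print("ML  :", rss / n)        # biased low: ignores d.f. lost to beta-hat
print("REML:", rss / (n - p))  # corrects for the p estimated coefficients
```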
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
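The initial wavelet-median estimate described above is commonly computed as the median absolute deviation of the finest diagonal subband scaled by 0.6745. A sketch using PyWavelets; the wavelet choice is an assumption, and the paper's curve-fitting refinement and DMOS mapping are not reproduced:

```python
import numpy as np
import pywt

def initial_noise_sigma(img):
    """Initial estimate of the Gaussian noise standard deviation from
    the median of the finest-scale diagonal wavelet coefficients."""
    _, (_, _, cD) = pywt.dwt2(np.asarray(img, float), 'db8')
    return np.median(np.abs(cD)) / 0.6745  # MAD -> sigma for a Gaussian
```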
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present research paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals is also proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustrations. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed in R, and the results are displayed in tables to facilitate comparison.
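A simulation sketch of this kind of comparison, written in Python rather than the paper's R; the Bayes estimator shown is the posterior mean of σ² under Jeffreys' prior (squared-error loss), so the precautionary, entropy, and L1-loss variants are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n, reps = 2.0, 30, 5000
mle = np.empty(reps); bayes = np.empty(reps)
for i in range(reps):
    x = rng.rayleigh(scale=sigma, size=n)
    S = np.sum(x ** 2)
    mle[i] = np.sqrt(S / (2 * n))      # MLE of the Rayleigh scale
    # Under Jeffreys' prior (~ 1/sigma), t = sigma^2 has posterior
    # InvGamma(n, S/2), whose mean is (S/2)/(n-1).
    bayes[i] = np.sqrt((S / 2) / (n - 1))
for name, est in (("MLE", mle), ("Bayes", bayes)):
    print(f"{name}: bias={est.mean() - sigma:+.4f}  "
          f"MSE={np.mean((est - sigma) ** 2):.5f}")
```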
Theoretical methods for estimating moments of inertia of trees and boles.
John A. Sturos
1973-01-01
Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study
ERIC Educational Resources Information Center
Suero, Manuel; Privado, Jesús; Botella, Juan
2017-01-01
A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d′ and C of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d′ and C. Those methods have been mostly assessed by…
Distance measures and optimization spaces in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.
2015-01-01
Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.
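The Aitchison distance favored in the pseudopredator comparisons operates on centered log-ratio transformed signatures; a minimal sketch that assumes strictly positive proportions (zero replacement, calibration coefficients, and the full QFASA optimization are omitted):

```python
import numpy as np

def aitchison_distance(x, y):
    """Aitchison distance between two compositional fatty acid
    signatures (vectors of positive proportions summing to one)."""
    clr_x = np.log(x) - np.mean(np.log(x))  # centered log-ratio transform
    clr_y = np.log(y) - np.mean(np.log(y))
    return float(np.sqrt(np.sum((clr_x - clr_y) ** 2)))

print(aitchison_distance(np.array([0.2, 0.5, 0.3]), np.array([0.25, 0.45, 0.3])))
```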
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding
2013-01-01
Background: In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them, the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results: The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions: The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
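The indirect estimates underlying Methods 1 to 4 amount to rescaling predictive ability by the square root of heritability; a sketch (the cross-validation loop and the variance-component estimation that yields h² are assumed to happen elsewhere):

```python
import numpy as np

def indirect_accuracy(y_obs, y_pred, h2):
    """Predictive ability (correlation of predicted breeding values
    with phenotypes) divided by sqrt(h2) to approximate accuracy on
    the breeding-value scale."""
    ability = np.corrcoef(y_obs, y_pred)[0, 1]
    return ability / np.sqrt(h2)
```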
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, method, and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
Estimating Tree Height-Diameter Models with the Bayesian Method
Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models, respectively. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models, respectively. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
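As an illustration of the classical route for one candidate curve, a Weibull-type height-diameter model fitted by nonlinear least squares; the data and the exact parameterization are assumptions, and the Bayesian alternative would instead place priors on a, b, and c:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_hd(d, a, b, c):
    # H = 1.3 + a * (1 - exp(-b * D**c)); 1.3 m is breast height
    return 1.3 + a * (1.0 - np.exp(-b * d ** c))

d = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 32.0])  # dbh, cm (made up)
h = np.array([9.1, 12.8, 15.9, 18.2, 20.0, 21.3, 22.2])  # height, m (made up)
(a, b, c), cov = curve_fit(weibull_hd, d, h, p0=(25.0, 0.05, 1.0))
print(a, b, c, np.sqrt(np.diag(cov)))  # point estimates and asymptotic SEs
```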
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
Parameter estimation using weighted total least squares in the two-compartment exchange model.
Garpebring, Anders; Löfstedt, Tommy
2018-01-01
The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Wedemeyer, Gary A.; Nelson, Nancy C.
1975-01-01
Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges of blood chemistry values (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
Paule-Mandel estimators for network meta-analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta-analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta-analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between-study heterogeneity. Models for network meta-analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method, and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta-analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta-analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously, and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood-based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
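In the univariate setting that this extension builds on, the Paule and Mandel estimator chooses the between-study variance τ² so that the generalized Q statistic equals its expectation k - 1. A bisection sketch (the network version with random inconsistency effects is considerably more involved):

```python
import numpy as np

def paule_mandel_tau2(y, v, tol=1e-10):
    """Paule-Mandel tau^2 for a univariate random-effects meta-analysis.
    y: study effect estimates; v: their within-study variances."""
    y = np.asarray(y, float); v = np.asarray(v, float)
    k = len(y)

    def q_gen(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    if q_gen(0.0) <= k - 1:      # no excess heterogeneity: truncate at zero
        return 0.0
    lo, hi = 0.0, np.var(y) + v.max()
    while q_gen(hi) > k - 1:     # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:         # Q is monotone decreasing in tau2
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if q_gen(mid) > k - 1 else (lo, mid)
    return 0.5 * (lo + hi)

print(paule_mandel_tau2([0.2, 0.5, 0.8, 0.1], [0.02, 0.03, 0.02, 0.04]))
```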
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
NASA Astrophysics Data System (ADS)
Morimoto, Emi; Namerikawa, Susumu
The most distinctive recent trend in bidding and pricing behavior is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bidding, it is therefore the difference between the low-price investigation criterion and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets. Estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works, so two standard estimation methods coexist in Japan. In this study, we performed a statistical analysis of the bid information of civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis raises several issues relating bidding and pricing behavior to the estimation method used in Japanese public works bidding. The two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid-price distributions showed that, for large public works estimated with the unit-price-type method, bids tended to concentrate at the criteria for low-price bidding investigations more than under the accumulated estimation method. Meanwhile, the number of bidders for works estimated by the unit-price method tends to increase significantly, suggesting that unit-price estimation may be one of the factors construction companies weigh when deciding whether to participate in a bidding.
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
Yurimoto, Terumi; Hara, Shintaro; Isoyama, Takashi; Saito, Itsuro; Ono, Toshiya; Abe, Yusuke
2016-09-01
Estimation of pressure and flow has been an important subject in the development of implantable artificial hearts. To realize real-time viscosity-adjusted estimation of pressure head and pump flow for a total artificial heart, we propose the table estimation method with quasi-pulsatile modulation of a rotary blood pump, in which systolic high-flow and diastolic low-flow phases are generated. The table estimation method utilizes three kinds of tables: viscosity, pressure, and flow tables. Viscosity is estimated from the characteristic that the differential motor-speed value between the systolic and diastolic phases varies depending on viscosity. The potential of this estimation method was investigated using a mock circulation system, with glycerin solution diluted with salt water used to adjust the fluid viscosity. In verification using continuous-flow data, fairly good estimation was possible when the differential pulse width modulation (PWM) value of the motor between the systolic and diastolic phases was high. In estimation under the quasi-pulsatile condition, inertia correction was applied, and fairly good estimation was again possible when the differential PWM value was high, consistent with the continuous-flow results. In real-time estimation applying a moving average to the estimated viscosity, fair estimation was possible when the differential PWM value was high, showing that real-time viscosity-adjusted estimation of pressure head and pump flow would be possible with this novel estimation method when the differential PWM value is set high.
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods, viewed from the number of test items of an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values is an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
Brady, Eoghan; Hill, Kenneth
2017-01-01
Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death, and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). The mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. The mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates from the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured through a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method and a slight advantage over the rotation matrix method. Moreover, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
An evaluation of methods for estimating decadal stream loads
Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-01-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
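Beale's ratio estimator, one of the better performers noted above, applies a finite-sample bias correction to the mean-load/mean-flow ratio before scaling by the complete flow record. A sketch (variable names and the daily time step are assumptions; WRTDS is far more elaborate and not shown):

```python
import numpy as np

def beale_load(conc, q_sampled, q_full):
    """Beale's bias-corrected ratio estimate of total load.
    conc: sampled concentrations; q_sampled: concurrent flows;
    q_full: the complete (e.g., daily) flow record for the period."""
    l = np.asarray(conc, float) * np.asarray(q_sampled, float)  # sampled loads
    q = np.asarray(q_sampled, float)
    n = len(l)
    lbar, qbar = l.mean(), q.mean()
    s_lq = np.sum((l - lbar) * (q - qbar)) / (n - 1)
    s_qq = np.sum((q - qbar) ** 2) / (n - 1)
    correction = (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar ** 2))
    q_full = np.asarray(q_full, float)
    return len(q_full) * q_full.mean() * (lbar / qbar) * correction
```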
Warren, L.P.; Church, P.E.; Turtora, Michael
1996-01-01
Hydraulic conductivities of a sand and gravel aquifer were estimated by three methods: constant-head multiport-permeameter tests, grain-size analyses (with the Hazen approximation method), and slug tests. Sediment cores from 45 boreholes were undivided or divided into two or three vertical sections to estimate hydraulic conductivity based on permeameter tests and grain-size analyses. The cores were collected from depth intervals in the screened zone of the aquifer in each observation well. Slug tests were performed on 29 observation wells installed in the boreholes. Hydraulic conductivities of 35 sediment cores estimated by use of permeameter tests ranged from 0.9 to 86 meters per day, with a mean of 22.8 meters per day. Hydraulic conductivities of 45 sediment cores estimated by use of grain-size analyses ranged from 0.5 to 206 meters per day, with a mean of 40.7 meters per day. Hydraulic conductivities of aquifer material at 29 observation wells estimated by use of slug tests ranged from 0.6 to 79 meters per day, with a mean of 32.9 meters per day. The repeatability of the estimated hydraulic conductivities was estimated to be within 30 percent for the permeameter method, 12 percent for the grain-size method, and 9.5 percent for the slug test method. Statistical tests determined that the medians of estimates resulting from the slug tests and grain-size analyses were not significantly different but were significantly higher than the median of estimates resulting from the permeameter tests. Because the permeameter test is the only method considered that estimates vertical hydraulic conductivity, the difference in estimates may be attributed to vertical or horizontal anisotropy. The difference in the average hydraulic conductivities estimated by use of each method was less than 55 percent when compared to the estimated hydraulic conductivity determined from an aquifer test conducted near the study area.
Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz
2015-01-01
Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
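With uncorrelated errors, the combined estimate is the inverse-variance weighted average. A sketch (the ages are made up, but the reported error SDs of 0.97 and 1.35 years reproduce the combined SD of about 0.79 years):

```python
def combine_estimates(age_hand, sd_hand, age_teeth, sd_teeth):
    """Inverse-variance weighted combination of two uncorrelated
    age estimates; returns the combined age and its standard deviation."""
    w1, w2 = sd_hand ** -2, sd_teeth ** -2
    return (w1 * age_hand + w2 * age_teeth) / (w1 + w2), (w1 + w2) ** -0.5

age, sd = combine_estimates(16.0, 0.97, 17.2, 1.35)
print(round(age, 2), round(sd, 2))   # combined SD ~ 0.79 years
```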
Nariai, N; Kim, S; Imoto, S; Miyano, S
2004-01-01
We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls
Chae, Jeongsook; Jin, Yong; Sung, Yunsick
2018-01-01
Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot control, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimated motion. Thus, recognition accuracy can be improved compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When orientation is measured by the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error in motion estimation by the proposed method is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%. PMID:29324641
Improving estimates of genetic maps: a meta-analysis-based approach.
Stewart, William C L
2007-07-01
Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods
Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey
2010-01-01
Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
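A sketch of the log-probability regression idea for a sample censored at a single detection limit; the Weibull plotting positions and the moment calculation on the filled-in sample are assumptions about implementation detail:

```python
import numpy as np
from scipy import stats

def log_prob_regression(uncensored, n_censored):
    """Estimate mean and sd of a singly censored sample: regress logs of
    uncensored values on their normal scores, impute the censored tail
    from the fitted line, then compute moments of the filled-in sample."""
    x = np.sort(np.log(np.asarray(uncensored, float)))
    n = len(x) + n_censored
    pp = np.arange(1, n + 1) / (n + 1.0)     # Weibull plotting positions
    z = stats.norm.ppf(pp)
    slope, intercept = np.polyfit(z[n_censored:], x, 1)  # uncensored ranks
    imputed = intercept + slope * z[:n_censored]         # censored ranks
    full = np.exp(np.concatenate([imputed, x]))
    return full.mean(), full.std(ddof=1)
```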
NASA Astrophysics Data System (ADS)
Acharya, S.; Mylavarapu, R.; Jawitz, J. W.
2012-12-01
In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate groundwater evapotranspiration (ETg) by vegetation, an approach known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, the drainable porosity (or specific yield) and the fillable porosity. Yet most studies implicitly assume that these two parameters are equal, unless hysteresis is considered. The White-based method available in the literature likewise relies on a single drainable porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable (λf) porosity parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic moisture profile approach. The modified method is then applied to estimate ETg using DWTF data observed at a field site in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. The modified method yielded significantly better estimates of ETg than the previously available method that used only a single, hydrostatic-moisture-profile-based λd. Furthermore, the modified method was also used to estimate ETg during rainfall events, where it likewise produced significantly better estimates than the single-λd-parameter method.
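A rough sketch of the White computation with the two storage parameters separated is given below; the pre-dawn window used for the recovery rate and the way λd and λf enter the budget are assumptions of this sketch, not the paper's exact formulation.

    import numpy as np

    def white_etg(head, dt_hours, lam_d, lam_f):
        # head: one 24-h record of water-table elevation (m); dt_hours: sampling step.
        # The recovery rate r is taken from the pre-dawn (first 4 h) rise, when ET
        # is assumed negligible; net inflow is scaled by the fillable porosity
        # lam_f and the net 24-h decline by the drainable porosity lam_d.
        per_hour = int(round(1.0 / dt_hours))
        night = head[: 4 * per_hour + 1]
        r = (night[-1] - night[0]) / 4.0      # recovery rate (m/h)
        s = head[0] - head[-1]                # net 24-h decline (m), > 0 if falling
        return lam_f * 24.0 * r + lam_d * s   # ETg (m of water per day)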
Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo
2015-11-20
This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. It also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. To this end, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify the performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
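As one illustration of the bootstrap route, the sketch below resamples residuals around a least squares Hill fit; the three-parameter Hill form, the starting values, and the residual-resampling flavour are assumptions of the sketch.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, top, ac50, n):
        # Three-parameter Hill model (zero baseline assumed).
        return top / (1.0 + (ac50 / conc) ** n)

    def hill_bootstrap_ci(conc, resp, n_boot=1000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        p0 = [resp.max(), np.median(conc), 1.0]
        popt, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
        resid = resp - hill(conc, *popt)
        boots = []
        for _ in range(n_boot):
            resp_b = hill(conc, *popt) + rng.choice(resid, size=resid.size, replace=True)
            try:
                pb, _ = curve_fit(hill, conc, resp_b, p0=popt, maxfev=10000)
                boots.append(pb)
            except RuntimeError:          # skip replicates that fail to converge
                continue
        boots = np.array(boots)
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
        return popt, lo, hi               # point estimate and percentile interval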
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.
Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar
2016-10-01
Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for haemoglobin estimation, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. The sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia compared with the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between the direct and indirect cyanmethaemoglobin methods.
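The prediction equation mentioned above is ordinary least squares calibration of indirect readings against direct ones; a minimal sketch with made-up paired values (not the study's data) follows.

    import numpy as np
    from scipy.stats import linregress

    def correct_indirect(direct, indirect, new_indirect):
        # Fit direct ~ indirect on paired samples (g/l) and apply the
        # prediction equation to new indirect readings.
        fit = linregress(indirect, direct)
        return fit.intercept + fit.slope * np.asarray(new_indirect)

    direct = np.array([120.0, 131.0, 102.0, 115.0, 98.0])     # hypothetical values
    indirect = np.array([114.0, 126.0, 97.0, 109.0, 93.0])
    print(correct_indirect(direct, indirect, [110.0, 100.0]))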
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program RORA allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models
Fodor, Nándor
2012-01-01
In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy), which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average relative error of 2.72 ± 1.02% (α = 0.05) in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the central plains of the USA. PMID:22645451
Huizinga, Richard J.; Rydlund, Jr., Paul H.
2004-01-01
The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad database to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
Levesque, V.A.; Hammett, K.M.
1997-01-01
The Myakka and Peace River Basins constitute more than 60 percent of the total inflow area and contribute more than half the total tributary inflow to the Charlotte Harbor estuarine system. Water discharge and nutrient enrichment have been identified as significant concerns in the estuary, and consequently, it is important to accurately estimate the magnitude of discharges and nutrient loads transported by inflows from both rivers. Two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers were compared. The first method was a tidal-estimation method, in which discharge and nutrient loads were estimated based on stage, water-velocity, discharge, and water-quality data collected near the mouths of the rivers. The second method was a traditional basin-ratio method in which discharge and nutrient loads at the mouths were estimated from discharge and loads measured at upstream stations. Stage and water-velocity data were collected near the river mouths by submersible instruments, deployed in situ, and discharge measurements were made with an acoustic Doppler current profiler. The data collected near the mouths of the Myakka River and Peace River were filtered, using a low-pass filter, to remove daily mixed-tide effects with periods less than about 2 days. The filtered data from near the river mouths were used to calculate daily mean discharge and nutrient loads. These tidal-estimation-method values were then compared to the basin-ratio-method values. Four separate 30-day periods of differing streamflow conditions were chosen for monitoring and comparison. Discharge and nutrient load estimates computed from the tidal-estimation and basin-ratio methods were most similar during high-flow periods. However, during high flow, the values computed from the tidal-estimation method for the Myakka and Peace Rivers were consistently lower than the values computed from the basin-ratio method. There were substantial differences between discharges and nutrient loads computed from the tidal-estimation and basin-ratio methods during low-flow periods. Furthermore, the differences between the methods were not consistent. Discharges and nutrient loads computed from the tidal-estimation method for the Myakka River were higher than those computed from the basin-ratio method, whereas discharges and nutrient loads computed by the tidal-estimation method for the Peace River were not only lower than those computed from the basin-ratio method, but they actually reflected a negative, or upstream, net movement. Short-term tidal measurement results should be used with caution, because antecedent conditions can influence the discharge and nutrient loads. Continuous tidal data collected over a 1- or 2-year period would be necessary to more accurately estimate the tidally affected discharge and nutrient loads for the Myakka and Peace River Basins.
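The filtering step can be sketched as follows; a zero-phase Butterworth low-pass stands in for the unspecified filter used in the study, and hourly sampling is assumed.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detide(series, dt_hours=1.0, cutoff_days=2.0):
        # Remove mixed-tide variability with periods shorter than ~2 days.
        fs = 1.0 / dt_hours                     # samples per hour
        cutoff = 1.0 / (cutoff_days * 24.0)     # cycles per hour
        b, a = butter(4, cutoff / (fs / 2.0))   # 4th-order low-pass
        return filtfilt(b, a, series)           # zero-phase filtering

    # Daily mean discharge from an hourly record q (length a multiple of 24):
    # daily = detide(q).reshape(-1, 24).mean(axis=1)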
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
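Among the estimators compared, the plug-in is the simplest to state; the sketch below also makes the word-length trade-off noted in the abstract concrete.

    from collections import Counter
    import numpy as np

    def plugin_entropy_rate(bits, word_len):
        # Empirical entropy of overlapping word_len-blocks, divided by the
        # block length. Short words miss long-range structure; long words
        # leave the empirical distribution undersampled (large bias).
        words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum() / word_len   # bits per symbol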
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to improving the precision of LFMCW radars effectively.
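The linear-prediction ingredient can be sketched in a few lines: for a sinusoid, x[n-1] + x[n+1] = 2cos(w)x[n], and solving this in least squares over the record gives a frequency estimate. This is only a simple stand-in for the proposed phase-match method, not the method itself.

    import numpy as np

    def freq_linear_prediction(x, fs):
        # Least squares solution of x[n-1] + x[n+1] = 2*cos(w)*x[n].
        num = np.sum(x[1:-1] * (x[:-2] + x[2:]))
        den = 2.0 * np.sum(x[1:-1] ** 2)
        w = np.arccos(np.clip(num / den, -1.0, 1.0))   # rad/sample
        return w * fs / (2.0 * np.pi)                  # Hz

    rng = np.random.default_rng(0)
    fs, f0 = 10e3, 1.23e3                              # illustrative values
    t = np.arange(2048) / fs
    x = np.cos(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
    print(freq_linear_prediction(x, fs))               # close to 1230 Hz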
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed for source number estimation in array signal processing, where multiple sensors are available, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to a multidimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the signal received by the single optical fiber sensor. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor data.
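A compact sketch of the ITC route (delay embedding, covariance eigenvalues, MDL) follows; the embedding dimension is an illustrative choice, and the eigenvalue-smoothing refinement described above is omitted.

    import numpy as np

    def embed(x, dim, delay=1):
        # Delay-embed single-sensor data into pseudo multichannel form.
        n = len(x) - (dim - 1) * delay
        return np.stack([x[i * delay: i * delay + n] for i in range(dim)])

    def mdl_source_number(x, dim=8):
        X = embed(np.asarray(x, dtype=float), dim)
        n = X.shape[1]
        lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]        # descending eigenvalues
        mdl = []
        for k in range(dim):
            tail = lam[k:]
            ratio = tail.mean() / np.exp(np.mean(np.log(tail)))   # arithmetic/geometric
            mdl.append(n * (dim - k) * np.log(ratio)
                       + 0.5 * k * (2 * dim - k) * np.log(n))
        return int(np.argmin(mdl))                                # estimated source count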
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely distributed observation points, compared to estimating slips only. Such an estimate can be formulated as a non-linear inverse problem for the material property of viscosity and an input force equivalent to fault slip, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
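The simpler inversion (SAINV2) admits a closed-form sketch via the Hatch-Choate relations, shown below; unit density and the assumption that essentially all particle mass lies below 1 µm (so PM1 stands for total mass) are simplifications of this sketch.

    import numpy as np

    def surface_area_inv2(number_conc, pm1, gsd=2.0, rho=1.0):
        # number_conc: particles/cm^3; pm1: ug/m^3; rho: g/cm^3 (assumed).
        ln2s = np.log(gsd) ** 2
        n_m3 = number_conc * 1e6                  # particles per m^3
        mass = pm1 * 1e-9                         # kg per m^3
        rho_si = rho * 1e3                        # kg per m^3
        # Mass of a lognormal number distribution:
        #   M = N * (pi/6) * rho * CMD^3 * exp(4.5 ln^2 GSD)  -> solve for CMD
        cmd = (6.0 * mass / (np.pi * rho_si * n_m3 * np.exp(4.5 * ln2s))) ** (1.0 / 3.0)
        # Surface area: S = N * pi * CMD^2 * exp(2 ln^2 GSD)
        sa = n_m3 * np.pi * cmd ** 2 * np.exp(2.0 * ln2s)     # m^2 per m^3
        return sa * 1e6                                       # um^2 per cm^3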
Ezoe, Satoshi; Morooka, Takeo; Noda, Tatsuya; Sabin, Miriam Lewis; Koike, Soichi
2012-01-01
Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
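The scale-up arithmetic itself is short; the sketch below uses made-up reference-group numbers, not the survey's.

    def network_scale_up(known_counts, known_sizes, m_hidden, total_population):
        # known_counts: mean acquaintances reported per reference group;
        # known_sizes: those groups' true sizes. The personal network size
        # c scales the reference counts to the whole population, and the
        # hidden group is then m_hidden / c of the population.
        c = sum(known_counts) / sum(known_sizes) * total_population
        return m_hidden / c * total_population

    # Illustrative only: size = network_scale_up([1.2, 0.8, 1.5],
    #     [300_000, 250_000, 240_000], m_hidden=0.07, total_population=60_000_000)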
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
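A bare-bones sketch of the stratified LACE computation is given below; it stratifies on the raw exposure and uses the usual ratio-of-associations estimate per stratum, whereas the paper stratifies more carefully (e.g., on the IV-free exposure) and then smooths the LACE estimates with fractional polynomials or a piecewise linear fit.

    import numpy as np

    def lace_by_stratum(g, x, y, n_strata=5):
        # g: genetic score (IV), x: exposure, y: outcome.
        order = np.argsort(x)
        lace = []
        for idx in np.array_split(order, n_strata):
            bx = np.polyfit(g[idx], x[idx], 1)[0]   # G-exposure association
            by = np.polyfit(g[idx], y[idx], 1)[0]   # G-outcome association
            lace.append(by / bx)                    # ratio (Wald) estimate
        return np.array(lace)   # gradient of the exposure-outcome curve per stratum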
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
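An ABC rejection sketch for one of the reported-statistics scenarios (median and quartiles, with a normal simulation model) is shown below; the priors, the normal model, and the acceptance fraction are assumptions of the sketch, and the sample size n is assumed moderate so the simulation fits in memory.

    import numpy as np

    def abc_mean_sd(median, q1, q3, n, n_draws=50_000, keep=0.01, seed=1):
        rng = np.random.default_rng(seed)
        scale = q3 - q1
        mu = rng.uniform(median - 5 * scale, median + 5 * scale, n_draws)
        sigma = rng.uniform(1e-6, 5 * scale, n_draws)
        sims = rng.standard_normal((n_draws, n)) * sigma[:, None] + mu[:, None]
        s_q1, s_med, s_q3 = np.percentile(sims, [25, 50, 75], axis=1)
        dist = (s_med - median) ** 2 + (s_q1 - q1) ** 2 + (s_q3 - q3) ** 2
        top = np.argsort(dist)[: int(keep * n_draws)]    # accepted draws
        return mu[top].mean(), sigma[top].mean()         # posterior means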
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-to-creatinine ratio by the predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. The mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
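The simple mean method reduces to one line of arithmetic; a sketch (units assumed consistent, e.g., mmol/l spot concentrations and mmol/day predicted creatinine) follows.

    def simple_mean_sodium(spot_na, spot_cr, predicted_24h_cr):
        # Average the sodium-to-creatinine ratios over the available spot
        # samples and scale by predicted 24-h creatinine excretion.
        ratios = [na / cr for na, cr in zip(spot_na, spot_cr)]
        return sum(ratios) / len(ratios) * predicted_24h_cr

    # e.g. simple_mean_sodium([140.0, 95.0, 120.0], [10.2, 7.5, 9.1],
    #                         predicted_24h_cr=11.0)   # hypothetical inputs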
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
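To illustrate the idea (though not the paper's exact interval construction), the sketch below computes a bias-corrected Chao1 estimate with the standard log-normal interval and then truncates the upper limit at the known maximum number of classes.

    import numpy as np

    def chao1_doubly_bounded(counts, max_classes):
        counts = np.asarray(counts)
        s_obs = int((counts > 0).sum())
        f1 = int((counts == 1).sum())     # singletons
        f2 = int((counts == 2).sum())     # doubletons
        d = f1 * (f1 - 1) / (2.0 * (f2 + 1))
        s_hat = s_obs + d
        # Standard bias-corrected Chao1 variance and 95% log-normal interval
        var = (d + f1 * (2 * f1 - 1) ** 2 / (4.0 * (f2 + 1) ** 2)
               + f1 ** 2 * f2 * (f1 - 1) ** 2 / (4.0 * (f2 + 1) ** 4))
        if d > 0 and var > 0:
            k = np.exp(1.96 * np.sqrt(np.log(1.0 + var / d ** 2)))
            lo, hi = s_obs + d / k, s_obs + d * k
        else:
            lo = hi = s_hat
        return s_hat, max(lo, s_obs), min(hi, max_classes)  # upper bound enforced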
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
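The differencing idea can be illustrated with a general-purpose solver in place of RQP: re-solve the problem at a perturbed parameter value from a warm start and difference the optima. This is a sketch of the concept, not the paper's algorithm.

    import numpy as np
    from scipy.optimize import minimize

    def design_sensitivity(objective, x0, p, dp=1e-4):
        # Forward-difference estimate of dx*/dp for the optimum x*(p).
        x_base = minimize(objective, x0, args=(p,)).x
        x_pert = minimize(objective, x_base, args=(p + dp,)).x   # warm start
        return (x_pert - x_base) / dp

    # Toy problem: minimize (x1 - p)^2 + (x2 - 2p)^2, so dx*/dp = [1, 2].
    f = lambda x, p: (x[0] - p) ** 2 + (x[1] - 2.0 * p) ** 2
    print(design_sensitivity(f, np.zeros(2), p=1.0))             # approx [1. 2.]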
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
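A sketch of the multi-scale LoG search is given below; the scale list, window size, and the radius rule r ~ sigma*sqrt(3) for a 3-D blob are illustrative assumptions rather than the paper's settings.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_nodule_estimate(volume, seed, scales=(2, 3, 4, 6, 8), window=20):
        # Search a window around the seed for the strongest scale-normalized
        # LoG response (negated: bright blobs give negative LoG at center).
        sl = tuple(slice(max(c - window, 0), c + window) for c in seed)
        patch = volume[sl].astype(float)
        best = (-np.inf, None, None)
        for s in scales:
            resp = -s ** 2 * gaussian_laplace(patch, sigma=s)
            idx = np.unravel_index(np.argmax(resp), resp.shape)
            if resp[idx] > best[0]:
                best = (resp[idx], idx, s)
        _, loc, sigma = best
        center = tuple(sl[i].start + loc[i] for i in range(3))
        return center, sigma * np.sqrt(3.0)   # center (voxels) and radius estimate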
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, JiřÃ.; Å vihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in the quality control. The individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets. The segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
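The two shape-based estimators reduce to elementary formulas; a sketch follows, in which the unseen third axis of the ellipsoid is assumed equal to the minor axis.

    import numpy as np

    def islet_volume_sphere(area_px, px_size_um):
        # Radius of the circle with the same area as the segmented profile.
        r = np.sqrt(area_px / np.pi) * px_size_um
        return 4.0 / 3.0 * np.pi * r ** 3              # um^3

    def islet_volume_ellipsoid(major_um, minor_um):
        # Semi-axes from the fitted profile; third axis = minor (assumed).
        a, b = major_um / 2.0, minor_um / 2.0
        return 4.0 / 3.0 * np.pi * a * b * b           # um^3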